Proactive insights, news and tips from Oracle WebLogic Server Support. Learn Oracle from Oracle.

Recent Posts

Technical

Three ways to customise the WebLogic MarketPlace Stacks - Part 3

Add a WebLogic pay-as-you-go node to an existing Kubernetes cluster

Oracle Marketplace offers a streamlined way to get started with a publisher's software, and through the Marketplace UI you can simply spin up any of these solutions through a simple wizard. This blog is the third chapter in a series that explores your options for working with Terraform and the Marketplace Stacks and Images of WebLogic as provided on Marketplace. You will learn how you can add nodes to an existing Kubernetes cluster running WebLogic, using a "pay-as-you-go" compute image that has the license entitlements for running WebLogic. This allows you to convert an existing, custom setup of WebLogic on Kubernetes into a "pay-as-you-go" version, bypassing the standard Marketplace automation.

You can read more on the underlying mechanisms of the WebLogic Marketplace stack in my previous articles: in the first blog you could read how to automate the launch of the WebLogic stack as it is provided on Marketplace, using Terraform and a bit of OCI Command Line. In the second blog you saw how to launch an individual pay-as-you-go VM image with full control of the setup process, omitting the standard Stack provided on Marketplace.

Convert a custom WebLogic setup running on OKE into Pay-as-you-go

Customers that have already done the work of setting up their own configuration of WebLogic running on OKE, without using the Marketplace images, can now migrate that setup into a Pay-as-you-go, UC-consuming instance by simply switching the WLS configuration onto a new node pool that runs the UCM-licensed (and billed) Marketplace image. The basic principle has already been used in Option 2 of this article: obtain the required "Terms and Conditions" agreements using Terraform, and download the Marketplace stack to obtain the OCIDs of the correct images. But instead of spinning up a compute instance, we will now add a node pool to an existing cluster. Of course you can integrate this logic into your existing automation scripts, allowing you to spin up a new environment from scratch using your existing automation, but now based on UC-consuming node pools. For the sake of example, I will do this on an existing running Kubernetes cluster.

Get the T&C agreements in place

First we need to obtain the required T&C agreements and the URL to the WLS for OKE stack you require (this can be Enterprise Edition or Suite). This is basically the exact same operation as demonstrated in the previous examples, but now using a different image name as a filter:

# DATA 1 - Get a list of elements in Marketplace, using filters, e.g. the name of the stack
data "oci_marketplace_listings" "test_listings" {
  name           = ["Oracle WebLogic Server Enterprise Edition for OKE UCM"]
  compartment_id = var.compartment_ocid
}

As a result, you will obtain the usual URL to the Stack configuration in the Data 4 element. Download and unzip the stack, and locate the values you need to subscribe to the image:

- Open the file mp-subscription-variables.tf
- Locate the lines defining the variable mp_wls_node_pool_image_id, and note the default value of the image OCID.
- Locate the lines defining the mp_wls_node_pool_listing_id, and note down the App Catalog Listing OCID.
- Locate the line containing the mp_wls_node_pool_listing_resource_version; this is a value that looks like 20.4.1-201103061109.

Setting up the agreement for the image

As already described in Option 2, you can now use these variables to subscribe to this image. Get the partner image subscription data:

data "oci_core_app_catalog_subscriptions" "mp_image_subscription" {
  #Required
  compartment_id = var.compartment_ocid
  #Optional
  listing_id = var.mp_listing_id
  filter {
    name   = "listing_resource_version"
    values = [var.mp_listing_resource_version]
  }
}

Obtain the Image Agreement using the parameters we just located:

#Get Image Agreement
resource "oci_core_app_catalog_listing_resource_version_agreement" "mp_image_agreement" {
  listing_id               = var.mp_listing_id
  listing_resource_version = var.mp_listing_resource_version
}

Agree to the Terms and Conditions:

#Accept Terms and Subscribe to the image, placing the image in a particular compartment
resource "oci_core_app_catalog_subscription" "mp_image_subscription" {
  compartment_id           = var.compartment_ocid
  eula_link                = oci_core_app_catalog_listing_resource_version_agreement.mp_image_agreement[0].eula_link
  listing_id               = oci_core_app_catalog_listing_resource_version_agreement.mp_image_agreement[0].listing_id
  listing_resource_version = oci_core_app_catalog_listing_resource_version_agreement.mp_image_agreement[0].listing_resource_version
  oracle_terms_of_use_link = oci_core_app_catalog_listing_resource_version_agreement.mp_image_agreement[0].oracle_terms_of_use_link
  signature                = oci_core_app_catalog_listing_resource_version_agreement.mp_image_agreement[0].signature
  time_retrieved           = oci_core_app_catalog_listing_resource_version_agreement.mp_image_agreement[0].time_retrieved
  timeouts {
    create = "20m"
  }
}

Creating the Node Pool

With all the elements we have collected, you can now spin up a new node pool in an existing OKE cluster. All you need is the OKE OCID as well as the worker node subnet OCID. Below is a sample of the code required to do this:

locals {
  oke_id    = "ocid1.cluster.oc1.eu-frankfurt-1.ocid_of_existing OKE"
  subnet_id = "ocid1.subnet.oc1.eu-frankfurt-1.ocid_of_worker_subnet"
}

resource "oci_containerengine_node_pool" "K8S_pool1" {
  cluster_id         = local.oke_id
  compartment_id     = var.compartment_ocid
  kubernetes_version = "v1.18.10"
  name               = "wls_uc_pool"
  node_shape         = var.compute_shape
  node_config_details {
    dynamic "placement_configs" {
      for_each = local.ad_nums2
      content {
        availability_domain = placement_configs.value
        subnet_id           = local.subnet_id
      }
    }
    size = 1
  }
  node_source_details {
    image_id    = data.oci_core_app_catalog_listing_resource_version.test_catalog_listing.listing_resource_id
    source_type = "IMAGE"
  }
}

Please make sure to use the same Kubernetes version for the node pool as the version of the cluster you are attaching it to. Rerun the Terraform script with the image_subscription.tf file included and your cluster will be extended with a new node pool. You can now shut down the old node pools, to run only on your new node pool. An example of the Terraform script can be found in the folder wls_nodepool.

Pinning your WebLogic Pods to the new Node Pool

Alternatively, you can use the "pinning" mechanism to run your WebLogic pods only on this new "pay-as-you-go" infrastructure, while keeping other node pools in your cluster for non-WLS workloads. To achieve this we will be using node labels and the nodeSelector parameter in the WebLogic Domain definition file deployed on the cluster. Label the nodes that are running the pay-as-you-go WebLogic image:

kubectl label nodes 130.61.84.41 licensed-for-weblogic=true

Change the definition file of your WebLogic domain to include the nodeSelector parameter in the serverPod section (note that the label value must be quoted so it is treated as a string):

serverPod:
  env:
    [...]
  nodeSelector:
    licensed-for-weblogic: "true"

Update your domain definition file on the cluster and your WebLogic pods should start migrating to the labeled nodes!
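To confirm the pinning took effect, you can check which nodes carry the label and where the WebLogic pods are now scheduled. A minimal sketch, assuming the domain runs in a namespace called sample-domain1-ns (a placeholder; use your own namespace):

# Show the licensing label on each node
kubectl get nodes -L licensed-for-weblogic

# Show which node each WebLogic pod is running on
kubectl get pods -n sample-domain1-ns -o wide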

Announcement

Run WebLogic on Azure VMs with Coherence and ELK support

We are delighted to announce a major release for Oracle WebLogic Server (WLS) on Azure Virtual Machines (VMs). The release is jointly developed as part of a broad-ranging partnership between Oracle and Microsoft. Software available on Azure under the partnership includes WebLogic Server, Oracle Linux, and Oracle Database, as well as interoperability between Oracle Cloud Infrastructure (OCI) and Azure. This release covers most common use cases for WebLogic Server on Azure, such as base image, single working instance, clustering, load balancing via App Gateway, database connectivity, integration with Azure Active Directory, caching with Oracle Coherence and consolidated logging via the ELK stack. WebLogic Server is a key component in enabling enterprise Java workloads on Azure. Customers are encouraged to evaluate these solutions for full production usage and reach out to collaborate on migration cases.

Use cases and roadmap

The partnership between Oracle and Microsoft was announced in June of 2019. Under the partnership, we announced the initial release of the WebLogic Server on Azure VMs at Oracle OpenWorld 2019. The solutions simplify migration by automating boilerplate provisioning of virtual networks/storage, installing OS/Java resources, setting up WebLogic Server, and configuring integration with key Azure services. Earlier releases focused on a basic set of use cases such as single working instance and clustering. This release expands features to cover most migration scenarios. Some of the features added include distributed caching via Coherence, consolidated logging via ELK, SSL certificate management, DNS configuration, and support for Oracle HTTP Server (OHS) as a load-balancing option. The database integration feature supports Azure PostgreSQL, Azure SQL, and Oracle Database running on OCI or Azure. The solutions let you choose from a variety of base images including WebLogic Server 12.2.1.3.0 with JDK8u131/251 and Oracle Linux 7.4/7.6, or WebLogic 14.1.1 with JDK11u01 on Oracle Linux 7.6. All base images are also available on Azure on their own. The standalone base images are suitable for customers that require very highly customized deployments.

In addition to the WebLogic Server on Virtual Machines solutions, Oracle and Microsoft have provided an initial set of solutions to run WebLogic Server on the Azure Kubernetes Service. Going forward, we will be adding more enhancements to the WebLogic Server on Virtual Machines solutions such as support for Red Hat Enterprise Linux in addition to Oracle Linux. We will also add more robust automation for the WebLogic Server on AKS solutions. The solutions will enable a variety of robust production-ready deployment architectures with relative ease, automating the provisioning of most critical components quickly and allowing customers to focus on business value add. Once the initial provisioning is completed by the solutions, you are free to further customize the deployment including integrating with more Azure services. These offers are all Bring-Your-Own-License. They assume you have already procured the appropriate licenses with Oracle and are properly licensed to run offers in Azure.

Get started

Customers interested in WebLogic Server on Azure Virtual Machines should explore the solutions, provide feedback and stay informed of the roadmap. Customers can also take advantage of hands-on help from the engineering team behind these offers.
The opportunity to collaborate on a migration scenario is completely free while the offers are under active development.

Oracle WebLogic Server for OCI/OKE March release

We’re pleased to announce the availability of the March release of the Oracle WebLogic offerings available in Oracle Cloud Marketplace. The new features available in this release are:

Simplified backup and replication: OCI provides a feature, called OCI custom images, that allows users to back up or replicate an existing compute instance, with the customizations, software and configurations that have been included in the instance. You can now back up and replicate your Oracle WebLogic for OCI configuration using OCI Compute custom images. Any compute instance created using Oracle WebLogic Server for OCI release version 21.1.3 or later is enabled to create custom images. These custom images are not exportable to other regions and do not include attached block volumes. To learn more about OCI custom images, see the OCI documentation.

Oracle WebLogic Server for OCI Disaster Recovery support for File Storage Service: Prior to this release we supported the use of the Database File System (DBFS) for ongoing replication of primary WebLogic Server for OCI configurations to standby configurations for disaster recovery purposes. In this release we additionally support use of the OCI File Storage Service (FSS) for this replication. The Disaster Recovery Guide now includes instructions to use FSS instead of DBFS to configure the staging mounts for WebLogic config replication.

Flexible load balancer support: Take advantage of the recently released OCI Flexible Load Balancing. Just specify the minimum and maximum bandwidth, and the load balancer will automatically adapt to your needs within the thresholds you have specified.

Support for existing OKE clusters: In prior releases, a new OKE cluster was always created when a new Oracle WebLogic Server for OKE configuration was deployed. Now you can deploy new Oracle WebLogic Server for OKE configurations and target an existing OKE cluster for deployment. All you need to do is specify the OCID of the VCN of the OKE cluster you want to reuse and your existing OKE cluster's OCID.

Visit the What's new web pages to see all the latest features available: the Oracle WebLogic for OCI What's new page and the Oracle WebLogic for OKE What's new page. If you want to learn more about Oracle WebLogic offerings available in Oracle Cloud Marketplace, please visit the following blogs: Announcing Oracle WebLogic Server for Oracle Cloud Infrastructure; Oracle WebLogic Server for OKE Now Available on Oracle Cloud Marketplace; Disaster Recovery in Oracle WebLogic Server for Oracle Cloud Infrastructure; Use your Universal Credits with Oracle WebLogic Server for OKE; Three use-cases to automate the creation of WebLogic Marketplace Stacks; Three ways to customize the WebLogic Marketplace Stacks - Part 2.
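If you prefer to script the new backup capability, the underlying OCI operation is a standard custom-image creation. A minimal sketch using the OCI CLI, with placeholder OCIDs and display name that you would replace with your own instance and compartment values:

# Create a custom image from a running WebLogic for OCI compute instance
oci compute image create \
  --compartment-id ocid1.compartment.oc1..your_compartment_ocid \
  --instance-id ocid1.instance.oc1.eu-frankfurt-1.your_wls_instance_ocid \
  --display-name wlsoci-backup-march-release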

Announcement

Run Oracle WebLogic Server on the Azure Kubernetes Service

We are delighted to announce the release of solutions to run Oracle WebLogic Server (WLS) on the Azure Kubernetes Service (AKS) as part of a broad-ranging partnership between Microsoft and Oracle. The partnership includes joint support for a range of Oracle software running on Azure, including Oracle WebLogic, Oracle Linux, and Oracle DB, as well as interoperability between Oracle Cloud Infrastructure (OCI) and Azure. WLS is a key component in enabling enterprise Java workloads on Azure. This release certifies that WebLogic is fully enabled to run on AKS and includes a set of instructions, samples, and scripts intended to make it easy to get started with production ready deployments. Evaluate the solutions for full production usage and reach out to collaborate on migration cases.

Solution Details and Roadmap

WLS on Azure Linux Virtual Machines solutions were announced last September covering several important use cases, such as base image, single working instance, clustering, load-balancing via Azure App Gateway, database integration, and security via Azure Active Directory. This current release enables basic support for running WebLogic domains on AKS reliably through the WebLogic Kubernetes Operator, offering a wider set of options for deploying WLS on Azure. WebLogic Server domains are fully enabled to run on Kubernetes via the WebLogic Kubernetes Operator. The Operator simplifies the management and operation of WebLogic domains and deployments on Kubernetes by automating manual tasks and adding additional operational reliability features. Alongside Microsoft, the WebLogic team has tested, validated and certified that the Operator runs well on AKS. Beyond certification and support, Oracle and Microsoft provide detailed instructions, scripts, and samples for running WebLogic Server on AKS. These solutions, incorporated into the Operator itself, are designed to make production deployments as easy and reliable as possible.

The WLS on AKS solutions allow a high degree of configuration and customization. The solutions will work with any WLS version that supports the Operator, such as 12.2.1.3 and 12.2.1.4, and use official WebLogic Server Docker images provided by Oracle. Failover is available via Azure Files accessed through Kubernetes persistent volume claims, and Azure Load Balancers are supported when provisioned using a Kubernetes Service type of 'LoadBalancer'. The solutions enable a wide range of production-ready deployment architectures with relative ease, and you have complete flexibility to customize your deployments. After deploying your applications, you can take advantage of a range of Azure/OCI resources for additional functionality. The solutions enable custom images with your domain inside a Docker image, deployed through the Azure Container Registry (ACR). The solutions can also support deploying the domain outside the Docker image and using the standard Docker images from Oracle as-is. Further ease-of-use and Azure service integrations will be possible in the coming months via Marketplace offerings mirroring the WLS on Azure Virtual Machines solutions.

These offers are Bring-Your-Own-License. They assume you have already procured the appropriate licenses with Oracle and are properly licensed to run WLS on Azure. The solutions themselves are available free of charge as part of the Operator.

Get started with WLS on AKS

Explore the solutions, provide feedback, and stay informed of the roadmap.
You can also take advantage of hands-on help from the engineering team behind these offers. The opportunity to collaborate on a migration scenario is completely free while the offers are under active development.
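Since the deployment path described above runs WebLogic domains under the WebLogic Kubernetes Operator, a typical first step on an existing AKS cluster is installing the operator with Helm. The following is an illustration only; the chart repository URL, namespace and release name are taken from the operator project's public documentation and should be checked against the operator version you actually deploy:

# Add the operator's Helm chart repository (as documented by the operator project)
helm repo add weblogic-operator https://oracle.github.io/weblogic-kubernetes-operator/charts
helm repo update

# Install the operator into its own namespace on the AKS cluster
kubectl create namespace weblogic-operator-ns
helm install weblogic-operator weblogic-operator/weblogic-operator \
  --namespace weblogic-operator-ns \
  --wait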

Announcement

The NEW WebLogic Remote Console

We are very excited to announce the release of the WebLogic Remote Console (remote console). The remote console is a modern, lightweight, open-source console that you can use to manage and monitor your WebLogic Server 12.2.1.3, 12.2.1.4 and 14.1.1 domains. It is offered as an alternative to the WebLogic Server Administration Console (administration console) web application, which Oracle continues to support. In the remote console GitHub project https://github.com/oracle/weblogic-remote-console you will find the "How to get Started" guide and documentation.

The remote console is an application that runs on your local desktop and can connect to a WebLogic Server domain running in a physical or virtual machine, in a container, in Kubernetes, or in the Oracle Cloud. You simply start the console application, display the console UI in your browser, and connect to the Administration Server of the WebLogic Server domain using WebLogic Server REST APIs. The remote console is a new option for managing WebLogic Server that provides a number of key benefits:

It starts up fast. The remote console application is built on Helidon, a set of Java libraries for building lightweight applications and microservices, including microservices that interact with WebLogic Server. Go to helidon.io for more information.

It provides a modern look and feel for WebLogic administrators. See the screenshot below. The UI has been redesigned using the Oracle Alta UI Design system and the Oracle Redwood theme included with Oracle JavaScript Extension Toolkit (JET). While basic navigation is similar to the current administration console, monitoring and configuration information is more clearly separated. Common UI features such as shopping carts are leveraged.

You can use a single remote console to connect to multiple WebLogic Server domains. The remote console can connect to the administration server in a WebLogic Server domain, with the credentials supplied by the user, using WebLogic REST APIs. The user connects to a single domain at any point in time, but can also connect, over time, to multiple WebLogic Server domains running in physical machines, VMs, containers or Kubernetes to perform administration tasks.

Smaller WebLogic Server footprint. If you use the remote console, you don't need to run the administration console on WebLogic Server instances, and you don't need to install it. To install WebLogic Server without the administration console, use the WebLogic Server "slim" installers available on OTN or OSDC. Slim installers reduce the size of WebLogic Server downloads, installations, Docker images and Kubernetes pods. For example, a WebLogic Server 12.2.1.4 slim installer download is about 180 MB.

The remote console is fully supported in production with WebLogic Server 12.2.1.3, 12.2.1.4, and 14.1.1. For more information on support for the remote console please refer to My Oracle Support Document 2759056.1. As stated above, the remote console can be used to manage deployments on physical machines or VMs without containers or Kubernetes present. However, because the WebLogic Remote Console is implemented in open source, and because we expect the remote console to be attractive for WebLogic Server Kubernetes users, we are including the remote console in the group of open source tools that are part of the WebLogic Kubernetes ToolKit.
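Because the remote console drives the Administration Server entirely through the WebLogic RESTful management interface, a quick way to verify that a domain is reachable before connecting is to call that API directly. A minimal sketch, assuming an admin server on localhost:7001 and the hypothetical credentials weblogic/welcome1:

# A successful JSON response here means the remote console will be able
# to connect to this domain with the same credentials.
curl -u weblogic:welcome1 \
  -H "Accept: application/json" \
  http://localhost:7001/management/weblogic/latest/domainConfig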
We continue to provide tooling to make it simple to configure, monitor, provision, deploy, and manage WebLogic domains with the goal of providing the greatest degree of flexibility for where these domains can run.  We hope you will use the WebLogic Remote Console and we look forward to your feedback. 

Technical

Three ways to customise the WebLogic MarketPlace Stacks - Part 2

Develop your own Stack to launch a WebLogic Pay-as-you-go Image

Oracle Marketplace offers a streamlined way to get started with a publisher's software, and through the Marketplace UI you can simply spin up any of these solutions through a simple wizard. This blog is the second chapter in a series that explores your options for working with Terraform and the Marketplace Stacks and Images of WebLogic as provided on the Marketplace, offering you the ability to spin up your own customised stacks using the "Pay-as-you-go" consumption of WebLogic licenses. You will learn how to launch an individual Pay-as-you-go image with full control of the setup process, omitting the standard Stack provided on Marketplace. In the first blog you could read how to automate the launch of the WebLogic stack as it is provided on Marketplace, using Terraform and a bit of OCI Command Line. In a third article I will add a WebLogic Pay-as-you-go node pool to an existing OKE cluster, allowing you to easily switch an existing custom setup of WebLogic running on OKE to a pay-as-you-go type of consumption.

Launch a Pay-as-you-go image

In the first article of this series you learned how to use the normal stack with the out-of-the-box automation provided by Oracle. But what if you want to fully customize the install, while still using an instance that has a Pay-as-you-go license entitlement for WebLogic? Easy! Using the image information included in the stack definition, which you can obtain via the oci_marketplace_listing_package Data field, you can easily spin up any configuration you fancy, from a single image up to much more complex set-ups. To understand the basics of obtaining the necessary information from Marketplace, see the first blog in this series.

Locating 3 important parameters in the stack definition

To be able to do this operation you need to locate 3 hard-coded parameters inside the Stack definition. These parameters will change with each new version, so each initial setup of an automation will need to be done manually. Ideally the required parameters should be provided in the stack definition itself, which would allow you to automate this process entirely [==> enhancement request addressed to Product Management, coming soon!]. After running the Terraform script from Article 1, download the stack zip file (a filename looking like this: https://objectstorage.us-ashburn-1.oraclecloud.com/n/marketplaceprod/b/oracleapps/o/orchestration%2F85315320%2Fwlsoci-resource-manager-ee-ucm-mp-10.3.6.0.211714-20.3.3-201018183753.zip), and unzip the file.

- Open the file variables.tf and locate the lines defining the variable instance_image_id; note the default value of the OCID.
- Open the file mp-variables.tf and locate the lines defining the mp_listing_id; note down the App Catalog Listing OCID.
- Locate the line containing the mp_listing_resource_version; this is a value that looks like 20.4.1-201103061109.

Subscribing to the image

Although we already subscribed to the WebLogic Marketplace Stack Listing, we now need to subscribe (also) to this specific image in order to be able to use it in our tenancy and compartment. This can be achieved by running the following commands. Get the partner image subscription data:

data "oci_core_app_catalog_subscriptions" "mp_image_subscription" {
  #Required
  compartment_id = var.compartment_ocid
  #Optional
  listing_id = var.mp_listing_id
  filter {
    name   = "listing_resource_version"
    values = [var.mp_listing_resource_version]
  }
}

Obtain the Image Agreement using the parameters we just located:

#Get Image Agreement
resource "oci_core_app_catalog_listing_resource_version_agreement" "mp_image_agreement" {
  listing_id               = var.mp_listing_id
  listing_resource_version = var.mp_listing_resource_version
}

Agree to the Terms and Conditions:

#Accept Terms and Subscribe to the image, placing the image in a particular compartment
resource "oci_core_app_catalog_subscription" "mp_image_subscription" {
  compartment_id           = var.compartment_ocid
  eula_link                = oci_core_app_catalog_listing_resource_version_agreement.mp_image_agreement[0].eula_link
  listing_id               = oci_core_app_catalog_listing_resource_version_agreement.mp_image_agreement[0].listing_id
  listing_resource_version = oci_core_app_catalog_listing_resource_version_agreement.mp_image_agreement[0].listing_resource_version
  oracle_terms_of_use_link = oci_core_app_catalog_listing_resource_version_agreement.mp_image_agreement[0].oracle_terms_of_use_link
  signature                = oci_core_app_catalog_listing_resource_version_agreement.mp_image_agreement[0].signature
  time_retrieved           = oci_core_app_catalog_listing_resource_version_agreement.mp_image_agreement[0].time_retrieved
  timeouts {
    create = "20m"
  }
}

Creating the Compute instance

Finally we are able to start using the image, using the parameter data.oci_core_app_catalog_listing_resource_version.test_catalog_listing.listing_resource_id as the image reference:

resource "oci_core_instance" "MyWLS" {
  compartment_id = var.compartment_ocid
  display_name   = var.hostname_1
  shape          = var.sql_shape
  create_vnic_details {
    subnet_id      = var.my_subnet_ocid
    display_name   = "jle-wls"
    hostname_label = "jle-wls"
  }
  source_details {
    source_id   = data.oci_core_app_catalog_listing_resource_version.test_catalog_listing.listing_resource_id
    source_type = "image"
  }
  lifecycle {
    ignore_changes = [
      source_details[0].source_id,
    ]
  }
}

A sample Terraform configuration can be found in the folder wls_image of this repository; you just need to fill in your authentication info and tenancy OCIDs to try this out.

Attention: you need to run this script twice. The first run obtains the stack parameters, with the file image_subscription.tff not taken into account (Terraform ignores .tff files). Then download the stack, fill in the 3 missing parameters in the file image_subscription.tff, and rename the file to image_subscription.tf so Terraform takes it into account. Now run terraform apply a second time, and the image will be created.

Next read

Make sure to check out my follow-up article on this topic, to be published in February: how to add Kubernetes node pools consuming the UC flavor of WebLogic to an existing, customer-built setup of WebLogic on Kubernetes.
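The two-pass flow above can also be scripted end to end. A rough sketch of the sequence, assuming the sample configuration from the wls_image folder and that you have already filled in the three parameters in image_subscription.tff:

# Pass 1: subscribe to the stack listing and print the stack URL
# (image_subscription.tff is ignored by Terraform at this point)
terraform init
terraform apply

# Download the stack zip, look up the image OCID, listing OCID and
# resource version, and copy them into image_subscription.tff

# Pass 2: activate the image subscription code and create the instance
mv image_subscription.tff image_subscription.tf
terraform apply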

The WebLogic Server

Migrating from Generic Data Source to Active GridLink

Earlier, I wrote an article about how to migrate from a Multi Data Source (MDS) for RAC connectivity to Active GridLink (AGL). This is needed to move from the older datasource technology to the newer technology, both supporting Oracle RAC. The information is now in the public documentation set at Migrating from Multi Data Source to Active GridLink. There are also many customers that are moving up from a standalone database to an Oracle RAC cluster. In this case, it's a migration from a GENERIC datasource to an AGL datasource.

This migration is pretty simple. No changes should be required to your applications. A standard application looks up the datasource in JNDI and uses it to get connections. The JNDI name won't change. The only changes necessary should be to your configuration, and the necessary information is generally provided by your database administrator. The information needed is the new URL and optionally the configuration of Oracle Notification Service (ONS) on the RAC cluster. The latter is only needed if you are running with:

- an 11g driver or 11g database. Auto-ONS depends on protocol flowing between the driver and the database server, and this feature was added in 12c.
- pre-WLS 12.1.3. Auto-ONS is supported starting in WLS 12.1.3.
- an Oracle wallet with SSL communications. Configuration of the wallet requires also configuring the ONS information.
- a complicated ONS topology. In general, auto-ONS can figure out what you need, but there are cases where you need to specify it precisely. In WLS 12.2.1, an extension of the ONS configuration allows for specifying the exact topology using a property node list. See ONS Client Configuration for more information.

The URL and ONS attributes are configurable but not dynamic. That means that the datasource will need to be shut down and restarted after the change. The simplest way to do this is to untarget the datasource, make the changes, and then re-target the datasource. The recommended approach to migrate from a GENERIC to an AGL datasource is to use WLST. The URL must be changed in the JDBCDriverParams object. The new JDBCOracleParams object (it generally doesn't exist for a GENERIC datasource) needs to have FAN enabled set to true and, optionally, the ONS information set. The following is a sample WLST script with the new values hard-coded. You could parameterize it and make it more flexible in handling multiple datasources. If you are using an Oracle wallet for ONS, that needs to be added to the JDBCOracleParams object as well.

# java weblogic.WLST file.py
import sys, socket, os
hostname = socket.gethostname()
datasource="JDBC Data Source-0"
connect("weblogic","welcome1","t3://"+hostname+":7001")
edit()
startEdit()
cd("/JDBCSystemResources/" + datasource )
targets=get("Targets")
set("Targets",jarray.array([], ObjectName))
save()
activate()
startEdit()
cd("/JDBCSystemResources/" + datasource + "/JDBCResource/" + datasource + "/JDBCDriverParams/" + datasource )
set("Url","jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(" + "ADDRESS=(PROTOCOL=TCP)(HOST=dbhost)(PORT=1521)))" + "(CONNECT_DATA=(SERVICE_NAME=otrade)))")
cd("/JDBCSystemResources/" + datasource + "/JDBCResource/" + datasource + "/JDBCOracleParams/" + datasource )
set("FanEnabled","true")
set("OnsNodeList","dbhost:6200")
# The following is for WLS 12.1.2 and 12.1.3 if not setting FanEnabled true, which is not recommended
#set("ActiveGridlink","true")
# The following is for WLS 12.2.1 and later
#cd("/JDBCSystemResources/" + datasource + "/JDBCResource/" + datasource )
#set("DatasourceType", "AGL")
save()
activate()
startEdit()
cd("/JDBCSystemResources/" + datasource )
set("Targets", targets)
save()
activate()

In WLS 12.1.2 and 12.1.3, there is an explicit ActiveGridlink flag that can be used to identify an AGL datasource, even if FanEnabled is not set to true (which is not recommended) and auto-ONS is used (12.1.2 is the first release in which auto-ONS is supported). In the script above, uncomment the line to set it only if FanEnabled and OnsNodeList are not set. Starting in WLS 12.2.1, there is an explicit datasource type at the JDBCResource level. If that is set to GENERIC, you must re-set it using set("DatasourceType","AGL"). In the script above, uncomment the lines to set it. In this case, the ActiveGridlink flag is not necessary. In the administration console, the database type is read-only and there is no mechanism to change the database type. You can try to get around this by setting the URL, FAN Enabled box, and ONS information. However, in 12.2.1 there is no way to re-set the Datasource Type in the console, and that value overrides all others. So it's recommended to use WLST.
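To run a script like this, the WebLogic classes have to be on the classpath first. A minimal sketch, assuming a standard installation under /u01/oracle and that the script above has been saved as migrate_to_agl.py (both are placeholders; adjust to your environment):

# Put the WebLogic classes on the classpath for this shell
. /u01/oracle/wlserver/server/bin/setWLSEnv.sh

# Execute the migration script against the running admin server
java weblogic.WLST migrate_to_agl.py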

The WebLogic Server

WebLogic Server 14 in Eclipse IDE for Java EE Developers

This article describes how to integrate WebLogic Server 14.1.1.0.0 running on Java SE 11 in the latest supported version of Eclipse IDE for Java EE Developers with Oracle Enterprise Pack for Eclipse (OEPE) 12.2.1.0. You need to start by getting all of the pieces - Java SE Development Kit, WebLogic Server, Eclipse IDE, and Oracle Enterprise Pack for Eclipse.

Go to https://www.oracle.com/java/technologies/javase-jdk11-downloads.html to download the Java SE Development Kit (neither WLS nor Eclipse comes with it). Java SE 8 is also supported with WLS 14.1.1.0.0, but we want to use the latest features. Accept the license agreement, download the binary file, and install it on your computer. Set your JAVA_HOME to the installation directory and add $JAVA_HOME/bin to your PATH. On Windows, this would look like:

set PATH=d:\java-11\bin;%PATH%
set JAVA_HOME=d:\java-11

Next, get and install a copy of WebLogic Server (WLS). The latest versions of WLS that are currently supported in OEPE are WLS 14.1.1.0.0 (Java SE 8 or 11 and Java EE 8) and WLS 12.2.1.4.0 (Java SE 8 and Java EE 7). Go to the standard WLS download page at Oracle Fusion Middleware Software Downloads and download the Generic Installer for 14.1.1.0. If you are running on Windows, the command window you use to run the jar command will need to be running as the Administrator.

unzip fmw_14.1.1.0.0_wls_lite_Disk1_1of1.zip
java -jar fmw_14.1.1.0.0_wls_lite_generic.jar

You can take all of the default values. You will need to specify an installation directory. You might want to install the examples. On the last page, make sure "Automatically Launch the Quickstart Configuration Wizard" is selected to create the domain so we can use it in this procedure. In the Configuration Wizard, take the defaults. You may want to change the domain directory. Enter a password and click on Create.

Download "Eclipse IDE for Java EE Developers" from http://www.eclipse.org/downloads/eclipse-packages/ and unzip it. The latest version that OEPE has been tested with is 2020-06.

unzip eclipse-jee-2020-06-R-win32-x86_64.zip

Change to the Eclipse installation directory and run eclipse. Select a directory as a workspace and optionally select to use this as the default. You can close the Welcome screen so we can get to work. If you are running behind a firewall and need to configure a proxy server to download files from an external web site, select the Preferences menu item under the Window menu, expand the General node in the preferences tree and select Network Connections, change the drop-down to Manual, and edit the http and https entries to provide the name of the proxy server and the port.

There are several ways to install OEPE. For this article, we will install OEPE from within Eclipse, from a repository in the Oracle Technology Network. From the Eclipse main menu, select Help, and then Install New Software. Click Add to add a new update site. Enter the "Name" Oracle and enter the "Location" http://download.oracle.com/otn_software/oepe/12.2.1.10/photon/repository/ . Then click "Add". In the software list, select the subcomponents you want (Documentation, ADF, MAF, and/or Tools), and then click Next. The following figure expands on the available tools. Confirm the information presented on the Install Details page, and then click Next. Review licenses on the Review Licenses page and click Finish. The installation will continue in the background. Click on the progress bar in the lower right corner to wait for it to complete. Eventually it will complete and ask if you want to restart Eclipse. Eclipse needs to restart to adopt the changes.

Click on the Window menu item, select Show View, and select the Servers view. Click on the link "No servers are available. Click this link to create a new server", expand Oracle, and select Oracle WebLogic Server. If you want to access the server remotely, you will need to enter the computer name; otherwise using localhost is sufficient for local access. Click Next. On the next screen, browse to the directory where you installed WLS 14.1.1.0 and then select the wlserver subdirectory (i.e., WebLogic home is always the wlserver subdirectory under the installation directory). Eclipse will automatically fill in the value of JAVA_HOME for the "Java home" value. Click Next. On the next screen, browse to the directory where you created the domain using the WLS Configuration Wizard (you can also click on the button to select from known domains). Click Next and Finish.

You can double-click on the new server entry that you just created to bring up an overview window for the server. Click on "Open launch configuration" to configure any options that you want while running the server and then click OK. Back in the server view, right-click on the server entry and select "Start" to boot the server. A Console window will open to display the server log output and eventually you will see that the server is in RUNNING mode.

That covers the logistics of getting everything installed to run WLS in Eclipse. Now you can create your project and start your development. Let's pick a Dynamic Web Project from the "File" -> "New" options. The project will automatically be associated with the server runtime that you just set up. For example, selecting a dynamic web project will display a window like the following, where the only value to provide is the name of the project. There are many tutorials on creating projects within Eclipse, once you get the tools set up.
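The PATH and JAVA_HOME example earlier uses Windows syntax. On Linux or macOS the equivalent is a pair of exports, sketched here assuming the JDK was unpacked under /opt/jdk-11 (a placeholder; adjust to your installation path):

# Make this JDK the default Java for the WLS installer, WebLogic and Eclipse
export JAVA_HOME=/opt/jdk-11
export PATH=$JAVA_HOME/bin:$PATH

# Quick check that the right JDK is picked up
java -version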

Technical

Three use-cases to automate the creation of WebLogic MarketPlace Stacks

Using Terraform to launch the WebLogic Marketplace Stack - Part 1

Oracle Marketplace offers a streamlined way to get started with a publisher's software, and through the Marketplace UI you can simply spin up any of these solutions through a simple wizard. But under the covers Marketplace offers two types of publishing mechanisms: a simple Image that can be launched on a Compute instance, or, for more complex configurations, a Stack, which represents a group of Oracle Cloud Infrastructure resources that you can act on as one installation. Using Terraform to launch a Marketplace image is one thing, and a few blogs have already been written on that topic. But what about automating the launch of a Marketplace Stack, in particular a "Pay-as-you-go" stack like WebLogic?

Oracle WebLogic Server for Oracle Cloud Infrastructure (OCI) enables provisioning of WebLogic domains on OCI through the use of a "stack", automating the setup of the various components required to run your WebLogic domain: network, load balancer, compute instances, or even a Kubernetes cluster. If you want to integrate the creation of a WebLogic domain as part of a larger automation on your Cloud Tenancy, you need to understand the principles of these Marketplace Stacks and how to access the corresponding images. This article is the first in a series that will explore several options to work with Terraform and the Stacks and Images of WebLogic as provided on the Marketplace, offering you the ability to spin up your own customized stacks using the "Pay-as-you-go" consumption of WebLogic licenses. In this first article, I will focus on how you can automate the launch of the WebLogic stack as it is provided on Marketplace, using Terraform and a bit of OCI Command Line. In a second article, I will detail how you can omit the provided automation of the stack and spin up a UC-consuming instance as part of a custom configuration of WLS you might already have. And finally, in a third article I'll explain how you can add Kubernetes node pools consuming the UC flavor of WebLogic to an existing, customer-built setup of WebLogic on Kubernetes.

Obtaining the required data from Marketplace

Before you can start creating resources from Marketplace, you need to locate the IDs of the correct Image or Stack you want to launch. This can be done through a series of Terraform Data Sources that are available. Because the names of the data elements are long and very similar, I tagged them with Data 1, Data 2, ... labels for easy reference. To follow this example on your tenancy, a sample Terraform configuration can be found on GitHub; you just need to fill in your own authentication info and tenancy OCIDs. You can also use the Cloud Shell, which offers an out-of-the-box setup of Terraform, available on your Oracle Cloud Tenancy homepage - use the >_ icon circled in red to launch it.

Data 1 - List the Marketplace Images available

You can use the oci_marketplace_listings Data Source element to get a full list of all entries in Marketplace. Please note these entries are called Listings in the interface. Using the name filter you can find the entry you are interested in. To find the exact name of an image, the simplest trick is to go through the normal Marketplace wizard and accept the T&C manually; this will create a Marketplace Agreement entry in your compartment with the exact name you need.

# DATA 1 - Get a list of elements in Marketplace, using filters, e.g. the name of the stack
data "oci_marketplace_listings" "test_listings" {
  name           = ["Oracle WebLogic Server Enterprise Edition UCM"]
  compartment_id = var.compartment_ocid
}

Alternatively you can omit the name filter and display the full list of entries. To show these, use the output command below:

# DATA 1 : List of entries in Marketplace
output "data_1_oci_marketplace_listings" {
  sensitive = false
  value     = data.oci_marketplace_listings.test_listings
}

Data 2 - Details of a single Marketplace Listing

Now that you've located the entry you are interested in, you can get more details on this entry with the oci_marketplace_listing Data Source element. Notice the singular Data 2 listing as opposed to the initial plural Data 1 listings.

# DATA 2 - Get details of the specific listing you are interested in
data "oci_marketplace_listing" "test_listing" {
  listing_id     = data.oci_marketplace_listings.test_listings.listings[0].id
  compartment_id = var.compartment_ocid
}

Data 3 - List and filter the available versions of the stack

A stack will probably offer a series of versions. For example, the WebLogic Stack is available in versions 10.3.6, 12.2.1.3, 12.2.1.4 and more. The oci_marketplace_listing_packages Data Source allows you to list and filter the version you want, using either an explicit version or the default_package_version of the stack as provided in the Data 2 element oci_marketplace_listing.default_package_version (in a comment in the code below).

# DATA 3 - Get the list of versions for the specific entry (11.3, 12.2.1, ....)
data "oci_marketplace_listing_packages" "test_listing_packages" {
  #Required
  listing_id = data.oci_marketplace_listing.test_listing.id
  #Optional
  compartment_id  = var.compartment_ocid
  package_version = "WLS 10.3.6.0.200714.05(11.1.1.7)"
  #package_version = data.oci_marketplace_listing.test_listing.default_package_version
}

Data 4 - Get details about a specific version

To get the details of your chosen version, we will use the oci_marketplace_listing_package Data Source element. Again, notice the use of the singular package.

# DATA 4 - Get details about a specific version
data "oci_marketplace_listing_package" "test_listing_package" {
  #Required
  listing_id      = data.oci_marketplace_listing.test_listing.id
  package_version = data.oci_marketplace_listing_packages.test_listing_packages.package_version
  #Optional
  compartment_id = var.compartment_ocid
}

Of particular interest is the resource_link element: this contains the URL to download the actual Terraform Stack of the Marketplace image! You can visualize this element by including the output element below.

# DATA 4 : Single version of an entry (11g)
output "DATA_4_oci_marketplace_listing_package" {
  sensitive = false
  value     = data.oci_marketplace_listing_package.test_listing_package.resource_link
}

As a result, you will get a URL that looks like: https://objectstorage.us-ashburn-1.oraclecloud.com/n/marketplaceprod/b/oracleapps/o/orchestration%2F85315320%2Fwlsoci-resource-manager-ee-ucm-mp-10.3.6.0.211714-20.3.3-201018183753.zip

Download this file, either to your local PC using a browser, or into the Cloud Shell with a curl command.

Accepting the License Agreement

Before you can run the stack you downloaded, you first need to accept the Terms and Conditions associated with the software of your choice. To do this, you first need to locate the correct agreement data using oci_marketplace_listing_package_agreements, then create your agreement with the resource type oci_marketplace_listing_package_agreement, and finally accept the Terms and Conditions through the creation of an oci_marketplace_accepted_agreement. After completing this last step you can see your agreement in the Marketplace Accepted Agreements list.

# DATA 5 - agreement for a specific version
data "oci_marketplace_listing_package_agreements" "test_listing_package_agreements" {
  #Required
  listing_id      = data.oci_marketplace_listing.test_listing.id
  package_version = data.oci_marketplace_listing_packages.test_listing_packages.package_version
  #Optional
  compartment_id = var.compartment_ocid
}

# RESOURCE 1 - agreement for a specific version
resource "oci_marketplace_listing_package_agreement" "test_listing_package_agreement" {
  #Required
  agreement_id    = data.oci_marketplace_listing_package_agreements.test_listing_package_agreements.agreements[0].id
  listing_id      = data.oci_marketplace_listing.test_listing.id
  package_version = data.oci_marketplace_listing_packages.test_listing_packages.package_version
}

# RESOURCE 2 - Accepted agreement
resource "oci_marketplace_accepted_agreement" "test_accepted_agreement" {
  agreement_id    = oci_marketplace_listing_package_agreement.test_listing_package_agreement.agreement_id
  compartment_id  = var.compartment_ocid
  listing_id      = data.oci_marketplace_listing.test_listing.id
  package_version = data.oci_marketplace_listing_packages.test_listing_packages.package_version
  signature       = oci_marketplace_listing_package_agreement.test_listing_package_agreement.signature
}

A sample Terraform configuration can be found on GitHub; you just need to fill in your authentication info and tenancy OCIDs to try this out.

Running the default stack with Resource Manager

You just collected all the required elements to run your stack with Resource Manager! Unfortunately, the Terraform OCI provider does not yet support the creation of a Stack or a Job, only listing these elements. To complete the automation, we'll switch to the OCI CLI to create the stack and run the Apply action. Once Create Stack and Create Job have been added to the Terraform OCI provider, this part can be included in the Terraform script we used so far.

Create the Stack

Below you can see the command to create the stack, using the zip file of the Stack definition you previously downloaded and the target compartment OCID.

oci resource-manager stack create --config-source stack.zip --compartment-id ocid1.compartment.oc1..your_compartment_ocid

Next you need to set all the parameters of the stack, as these would normally be asked interactively when starting a new job. The easiest way to do this is using a JSON formatted file with all elements included. Below you can see a sample of the file:

{
  "compartment_ocid": "ocid1.compartment.oc1..your_compartment_ocid",
  "region": "eu-frankfurt-1",
  "tenancy_ocid": "ocid1.tenancy.oc1..your_tenancy_ocid",
  "wls_node_count": "2",
  "wls_admin_password_ocid": "ocid1.vaultsecret.oc1.eu-frankfurt-1.your_secret_ocid",
  "use_advanced_wls_instance_config": "false",
  "vcn_strategy": "Create New VCN",
  "add_load_balancer": "true",
  "lb_shape": "100Mbps",
  "is_idcs_selected": "false",
  "create_policies": "true",
  "add_JRF": "false",
  "configure_app_db": "false",
  "defined_tag": "",
  "defined_tag_value": "",
  "free_form_tag": "",
  "free_form_tag_value": "",
  "network_compartment_id": "ocid1.compartment.oc1..your_network_compartment_ocid",
  "subnet_strategy_new_vcn": "Create New Subnet",
  "wls_subnet_cidr": "10.0.3.0/24",
  "lb_subnet_1_cidr": "10.0.4.0/24",
  "service_name": "jlewls",
  "instance_shape": "VM.Standard2.1",
  "ssh_public_key": "ssh-rsa AAAAB3NzaC1yc2EA...your_public_key...FEkVLdsdfgsdfgdfsg5ES1 ATPKey",
  "wls_vcn_name": "wlsvcn"
}

Please note there are more parameters available, for example when using the JRF type of installation including a database, or for integrating with IDCS. You can find the details in the README file included in the stack you downloaded. You can now update your stack with the above parameters using the update command below:

oci resource-manager stack update --stack-id ocid1.ormstack.oc1.eu-frankfurt-1.aaaaaaaa34p65jokvrtsa5q57t7mnxvo22abxm43fzgosfoxw3zws7yhifwa --variables file://vars.json

Create the Apply job

Next you need to create the apply job to trigger the execution of the stack creation.

oci resource-manager job create \
  --stack-id ocid1.ormstack.oc1.eu-frankfurt-1.aaaaaaaa34p65jokvrtsa5q57t7mnxvo22abxm43fzgosfoxw3zws7yhifwa \
  --operation APPLY \
  --apply-job-plan-resolution '{"isAutoApproved": true }'

And voilà, you just launched the creation of a WebLogic stack using Terraform and the OCI CLI only!

Next reads

Make sure to check out my follow-up articles on this topic:

- How to spin up a UC-consuming instance as part of a custom configuration of WLS you might already have, omitting the standard Marketplace automation but using pay-as-you-go.
- How to add Kubernetes node pools consuming the UC flavor of WebLogic to an existing, customer-built setup of WebLogic on Kubernetes.
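Once the apply job has been submitted, you can follow it from the same shell. A small sketch, assuming the job OCID returned by the previous command has been captured in a JOB_ID environment variable:

# Check the lifecycle state of the apply job (ACCEPTED, IN_PROGRESS, SUCCEEDED, FAILED)
oci resource-manager job get --job-id "$JOB_ID" \
  --query 'data."lifecycle-state"' --raw-output

# List all jobs that have been run against the stack
oci resource-manager job list \
  --stack-id ocid1.ormstack.oc1.eu-frankfurt-1.aaaaaaaa34p65jokvrtsa5q57t7mnxvo22abxm43fzgosfoxw3zws7yhifwa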

JMS JDBC Store Performance on Oracle RAC

Much performance testing has been done in the area of JMS running on a JDBC store on an Oracle RAC Cluster. The goal of this article is to point to some existing documentation, point out a new, related feature introduced in WLS 12.1.3, and summarize the various approaches.

First, let me point out suggestions for optimizing the Oracle database table that is used for the JMS backing store. The current JMS documentation proposes using a reverse index when enabling "I/O Multi-threading", which in turn is only recommended for heavy loads. If you have licensed the Oracle database partitioning option, you can use global hash partition indexes for the table. This cuts down the contention on the index, reduces waiting on the global cache buffer, and can significantly improve the response time of the application. Partitioning works well in all cases, some of which will not see significant improvements with a reverse index. See http://www.oracle.com/technetwork/database/availability/maa-fmw-soa-racanalysis-427647.pdf (this document also has some interesting comments about pool and cache size).

A second recommendation is to use SecureFiles to improve performance, make storage more efficient, and ease manageability. This is generally recommended to improve throughput with a JDBC store when message sizes are large and when network connections to the database are slow. See the details about SecureFiles at http://www.oracle.com/technetwork/database/availability/oraclefmw-soa-11gr1-securefiles-1842740.pdf.

Combining these together, the schema for the JMS backing store would look something like this:

CREATE TABLE JMS1.JMSWLSTORE (
  ID INT NOT NULL,
  TYPE INT NOT NULL,
  HANDLE INT NOT NULL,
  RECORD BLOB NOT NULL,
  PRIMARY KEY (ID) USING INDEX GLOBAL PARTITION BY HASH (ID) PARTITIONS 8 TABLESPACE JMSTS
)
LOB (RECORD) STORE AS SECUREFILE (TABLESPACE JMSTS ENABLE STORAGE IN ROW);

The number of partitions should be a power of two to get similar sizes in the partitions. The recommended number of partitions will vary depending on the expected table/index growth and should be analyzed over time by a DBA and adjusted accordingly. See the "Oracle Database VLDB and Partitioning Guide" for other relevant parameters. Look at the custom DDL feature to use a custom JMS JDBC store table (see http://docs.oracle.com/middleware/1213/wls/CNFGD/store.htm#i1160628).

Note that LLR tables are indexed on XIDSTR. Looking at some sample data, the keys are not sequential and only differ in a short middle subset, so a reversed index (i.e., reversing bytes from the key) would not help in that case. Using global hash partition indexes, as mentioned above, might be a better option for LLR tables. These improvements work whether you are using Multi Data Source (MDS) or Active GridLink (AGL) running against a RAC Cluster.

A new trick was added to the performance arsenal in WLS JMS 12.1.3. As with any application that uses a database, there's overhead in all round-trips from the application to the database. The JMS JDBC store code uses batching with databases that support it. Depending on the configured batch sizes (i.e., DeletesPerBatchMaximum and InsertsPerBatchMaximum) and the number of operations in the transaction, the transaction will consist of one or more round-trips for the batch execution(s) and a round-trip for the commit. The new configuration option OraclePiggybackCommitEnabled additionally piggybacks the commit on the batch operation (Oracle Thin driver only). For small transactions, a single round-trip executes the batch and does the commit, cutting the number of round-trips in half.

Much work has been done looking at the overall performance for both MDS and AGL. Starting with the connection affinity enhancements in WLS 12.1.2, the performance of MDS configured with failover (as opposed to round-robin) and AGL is roughly the same. The MDS failover algorithm reserves all connections on one instance until it fails, so that there is affinity of connections. Prior to WLS 12.1.2, MDS with failover had better performance than AGL because AGL did not have affinity of connections to a single instance. The use of AGL over MDS is recommended because it provides superior RAC support in many areas including management and failover (see Using Active GridLink Data Sources for more details). However, AGL is only licensed for use with WebLogic Suite.

Finally, the key to performance is not only the number of work threads (as one might expect) but the number of concurrent producers and consumers. For fewer than ten producers and consumers, use normal services on multiple RAC instances with the hash partitioned index and SecureFiles for the LOB, as described above. For higher concurrency (over nine producers and consumers), it is more efficient to use a singleton database service. It should be configured with a preferred instance and a failover instance for High Availability.

Optimization of the use of a WLS data source for a JMS JDBC store can significantly improve the performance of JMS throughput and the corresponding application. As with all performance investigations, you will need to test this with your own application and data.
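As a practical footnote to the schema discussion above: if you want to pre-create the partitioned, SecureFiles-based store table and then point the custom DDL feature at it, the DDL can be run with SQL*Plus. A sketch, assuming the CREATE TABLE statement shown earlier is saved as jmswlstore.sql and that a JMS1 schema already exists on a service named otrade (all placeholders):

# Run the store DDL as the schema owner (connection details are placeholders)
sqlplus JMS1/your_password@//dbhost:1521/otrade @jmswlstore.sql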



WLS UCP Datasource

WebLogic Server UCP Data Source Type

WebLogic Server (WLS) 12.2.1 introduced a new datasource type that uses the Oracle Universal Connection Pool (UCP) as an alternative connection pool. The UCP datasource allows for configuration, deployment, and monitoring of the UCP connection pool as part of the WLS domain. It is certified with the Oracle Thin driver (simple, XA, and replay drivers). The product documentation is at https://docs.oracle.com/en/middleware/fusion-middleware/weblogic-server/12.2.1.4/jdbca/ucp_datasources.html. The goal of this article is not to reproduce that information but to summarize the feature and provide some additional information and screen shots for configuring the datasource.

A UCP data source is defined using a jdbc-data-source descriptor as a system resource. The configuration for a UCP data source is pretty simple, with the standard datasource parameters: you can name it, give it a URL, user, password, and JNDI name. Most of the detailed configuration and tuning comes in the form of UCP connection properties. The administrator can configure values for any setter supported by oracle.ucp.jdbc.PoolDataSourceImpl except LogWriter (see oracle.ucp.jdbc.PoolDataSourceImpl) by just removing the "set" prefix from the attribute name (the names are case insensitive). For example: ConnectionHarvestMaxCount=3. Table 8-2 in the documentation lists more UCP attributes.

There is some built-in validation of the (common sense) combinations of driver and connection factory:

Driver: oracle.ucp.jdbc.PoolDataSourceImpl (default) - Factory (ConnectionFactoryClassName): oracle.jdbc.pool.OracleDataSource
Driver: oracle.ucp.jdbc.PoolXADataSourceImpl - Factory (ConnectionFactoryClassName): oracle.jdbc.xa.client.OracleXADataSource
Driver: oracle.ucp.jdbc.PoolDataSourceImpl - Factory (ConnectionFactoryClassName): oracle.jdbc.replay.OracleDataSourceImpl

To simplify the configuration, if the "driver-name" is not specified, it defaults to oracle.ucp.jdbc.PoolDataSourceImpl, and the ConnectionFactoryClassName connection property defaults to the corresponding entry from the table above. Example 8.1 in the product documentation gives a complete example of creating a UCP data source using WLST; WLST usage is very common for application configuration these days.

Monitoring is available via the weblogic.management.runtime.JDBCUCPDataSourceRuntimeMBean. This MBean extends JDBCDataSourceRuntimeMBean so that it can be returned with the list of other JDBC MBeans from the JDBC service for tools like the administration console or your WLST script. For a UCP data source, the state and the following attributes are set: CurrCapacity, ActiveConnectionsCurrentCount, NumAvailable, ReserveRequestCount, ActiveConnectionsAverageCount, CurrCapacityHighCount, ConnectionsTotalCount, NumUnavailable, and WaitingForConnectionSuccessTotal.

Creating a WLS UCP Data Source Using the Administration Console

The WLS administration console makes it easy to create, update, and monitor UCP datasources. The following images are from the administration console. For the creation path, there is a drop-down that lists the data source types; UCP is one of the choices. The resulting data source descriptor has datasource-type set to "UCP". The first step is to specify the JDBC Data Source Properties that determine the identity of the data source. They include the datasource names, the scope (stick with Global), and the JNDI names. The next page handles the user name and password, URL, and additional connection properties. Additional connection properties are used to configure the UCP connection pool.
There are two ways to provide the connection properties for a UCP data source in the console. On the Connection Properties page, all of the available connection properties for the UCP driver are displayed so that you only need to enter the property value. On the next page, Test Database Connection, you can enter a propertyName=value pair directly into the Properties text box. Any values entered on the previous Connection Properties page will already appear in the text box. This page can be used to test the specified values, including the connection properties. The Test Database Connection page allows you to enter free-form values for properties and test a database connection before the data source configuration is finalized. If necessary, you can provide additional configuration information using the Properties, System Properties, and Encrypted Properties attributes.

When using the Oracle 18c or later driver, the recommended URL should include CONNECT_TIMEOUT, RETRY_COUNT, RETRY_DELAY, and TRANSPORT_CONNECT_TIMEOUT, as in the following example (white space is for readability only):

alias=(DESCRIPTION=
    (CONNECT_TIMEOUT=90)(RETRY_COUNT=20)(RETRY_DELAY=3)(TRANSPORT_CONNECT_TIMEOUT=3)
    (ADDRESS_LIST=
        (LOAD_BALANCE=on)
        (ADDRESS=(PROTOCOL=TCP)(HOST=primary-scan)(PORT=1521)))
    (ADDRESS_LIST=
        (LOAD_BALANCE=on)
        (ADDRESS=(PROTOCOL=TCP)(HOST=secondary-scan)(PORT=1521)))
    (CONNECT_DATA=(SERVICE_NAME=myservice)))

This URL has two RAC clusters that each have a SCAN address (connected via Active Data Guard or GoldenGate). For a single RAC cluster, it would have only a single ADDRESS_LIST.

The final step is to target the data source. You can select one or more targets to which to deploy your new UCP data source. If you don't select a target, the data source will be created but not deployed; you will need to deploy the data source at a later time before you can get a connection in the application. For editing the data source, minimal tabs and attributes are exposed to configure, target, and monitor this data source type.

Creating a WLS UCP Data Source Using the FMW Console

The capabilities in FMWC are similar to the administration console but with a different look and feel. If you select JDBC Data Sources from the WebLogic Domain drop-down, you will see a list of existing data sources with their associated data source type, scope, and, if applicable, resource group (RG), resource group template (RGT), and partition. Selecting an existing data source name brings up a page to edit the data source. Selecting a resource group name (if it exists) brings up a page to edit the RG. Selecting a partition name of an existing data source brings up a page to edit the partition attributes. Selecting Create displays a data source type drop-down where you can select UCP Data Source. The first page of the UCP creation requires the data source name, scope, JNDI name(s), and a driver class name. Connection properties are input on the next page. Unlike the administration console, the UCP connection properties are not listed; you must add a new entry by selecting "+", type in the property name, and then enter the value. This page is also used to test the database connection. The final page in the creation sequence allows for targeting the data source and creating the new object.

Application Development with a UCP Data Source

Once you have your datasource configured and deployed, you access it using a JNDI lookup in your application, as with other WLS datasource types.
import javax.naming.Context;
import javax.naming.InitialContext;
import java.sql.Connection;
import oracle.ucp.jdbc.PoolDataSource;

Context ctx = new InitialContext();
PoolDataSource pds = (PoolDataSource) ctx.lookup("ucpDS");
Connection conn = pds.getConnection();

While the usage in the application looks similar to other WLS datasources, you don't have all of the features of a WLS datasource, but you get the additional features that the UCP connection pool supports. Note that there is no integration of the UCP datasource with WLS security or JTA transactions. UCP has its own JMX management. Start at this link for the UCP Introduction. When you see examples that execute PoolDataSourceFactory.getPoolDataSource() and then call several setters on the datasource, this is replaced with configuring the UCP datasource in WLST, REST, or the administration console. Pick up the example with getting the connection as above.

Using a WLS UCP Data Source vs UCP Directly

It's possible to write an application using the UCP API's directly within a WLS application. There are several advantages to using the WLS UCP data source within a WLS application and framework.
1. Configuration is done using the mechanism that is used for all WLS datasources.
2. It takes the configuration and deployment (creation) out of the Java code. Further, the deploy and undeploy can be controlled using the WLS administration mechanisms (administration console, WLST, REST).
3. It provides for monitoring using the WLS monitoring mechanisms (administration console, WLST, REST).

All of the features in UCP are available, including advanced features such as sharding, draining, and Transparent Application Continuity (TAC). Using the example above, sharding can be introduced easily as follows (this also requires imports for java.sql.JDBCType and oracle.jdbc.OracleShardingKey):

// Create a key corresponding to the sharding key columns, to access the correct shard
OracleShardingKey key = pds.createShardingKeyBuilder().subkey(100, JDBCType.NUMERIC).build();
// Fetch a connection to the shard corresponding to the key
Connection conn = pds.createConnectionBuilder().shardingKey(key).build();

Draining and TAC are transparent to the application.

Using a WLS UCP Data Source vs Active GridLink Data Source

UCP and Active GridLink (AGL) are both well integrated with support for Oracle RAC databases. That is, they can handle Fast Application Notification (FAN) events and use connections on the active nodes, support planned and unplanned database outages, handle Transparent Application Continuity (TAC), etc. AGL has the advantage that it is tightly integrated with WLS. If you want to do XA transactions, you need to use AGL. AGL supports WLS transaction management (one-phase, LLR, JTS, JDBC TLog, determiner resource, and so on), additional life cycle operations (suspend, resume, shutdown, forced shutdown, start, and so on), WLS security options and integration, and WLS data operations such as JMS, leasing, EJB, and so on. Using AGL requires an Oracle WebLogic Server Suite license.
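As a small illustration of how the connection properties map onto UCP setters, the sketch below reads back the ConnectionHarvestMaxCount value configured earlier through the corresponding oracle.ucp.jdbc.PoolDataSource getter after the JNDI lookup. This is a hedged sketch: "ucpDS" is a hypothetical JNDI name, and it assumes the harvesting getter is exposed on PoolDataSource in the UCP version you are using.

import java.sql.Connection;
import javax.naming.InitialContext;
import oracle.ucp.jdbc.PoolDataSource;

public class UcpPropertyCheck {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        // "ucpDS" is a placeholder JNDI name for the UCP data source.
        PoolDataSource pds = (PoolDataSource) ctx.lookup("ucpDS");

        // Each ConnectionProperty name corresponds to a PoolDataSource setter;
        // the getter returns the value configured in the WLS descriptor.
        System.out.println("ConnectionHarvestMaxCount = " + pds.getConnectionHarvestMaxCount());

        try (Connection conn = pds.getConnection()) {
            System.out.println("Connected: " + !conn.isClosed());
        }
    }
}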


Active GridLink URLs

Active GridLink (AGL) is the data source type that provides connectivity between WebLogic Server and an Oracle Database service, which may include one or more Oracle RAC clusters or Active Data Guard sites. As the supported topologies grow to include additional features like Global Database Services (GDS), and as new features are added to Oracle networking and database support, the URL used to access these services has also become more complex. There are lots of examples in the documentation; this short article summarizes patterns for defining the URL string for use with AGL.

It should be obvious, but let me start by saying that AGL only works with the Oracle Thin driver. AGL data sources only support long format JDBC URLs. The supported long format pattern is basically the following (there are lots of additional properties, some of which are described below):

jdbc:oracle:thin:@(DESCRIPTION=
    (CONNECT_TIMEOUT=90)(RETRY_COUNT=20)(RETRY_DELAY=3)(TRANSPORT_CONNECT_TIMEOUT=3)
    (ADDRESS_LIST=
        (LOAD_BALANCE=on)
        (ADDRESS=(PROTOCOL=TCP)(HOST=primary-scan)(PORT=primary-scan-port)))
    (ADDRESS_LIST=
        (LOAD_BALANCE=on)
        (ADDRESS=(PROTOCOL=TCP)(HOST=secondary-scan)(PORT=secondary-scan-port)))
    (CONNECT_DATA=(SERVICE_NAME=myservice)))

If not using SCAN, then the ADDRESS_LIST would have one or more ADDRESS attributes with HOST/PORT pairs. It's recommended to use SCAN if possible, and to use VIP addresses to avoid TCP/IP hangs.

Easy Connect (short) format URLs are not supported for AGL data sources. The following is an example of an Easy Connect URL pattern that is not supported for use with AGL data sources:

jdbc:oracle:thin:@[SCAN_VIP]:[SCAN_PORT]/[SERVICE_NAME]

General recommendations for the URL are as follows.
- Use a single DESCRIPTION. Avoid a DESCRIPTION_LIST to avoid connection delays.
- Use one ADDRESS_LIST per RAC cluster or Active Data Guard database. The example above assumes there are two RAC clusters. If you have one RAC cluster, then you would have one address list.
- Put RETRY_COUNT, RETRY_DELAY, CONNECT_TIMEOUT, and TRANSPORT_CONNECT_TIMEOUT at the DESCRIPTION level so that all ADDRESS_LIST entries use the same value. Note that these parameters are optional; URLs that were generated with earlier releases won't have them.
- RETRY_DELAY specifies the delay, in seconds, between connection retries. It is new in the 12.1.0.2 release.
- RETRY_COUNT is used to specify the number of times an ADDRESS list is traversed before the connection attempt is terminated. The default value is 0. When using SCAN listeners with FAILOVER=on, setting the RETRY_COUNT parameter to 2 means the three SCAN IP addresses are traversed three times each, such that there are nine connect attempts (3 * 3).
- CONNECT_TIMEOUT is used to specify the overall time allowed to complete the Oracle Net connect. Set CONNECT_TIMEOUT=90 or higher to prevent logon storms. Do not set the oracle.net.CONNECT_TIMEOUT driver property on the datasource because it is overridden by the URL property.
- For JDBC drivers up through 12c, CONNECT_TIMEOUT is also used as the TCP/IP connection timeout for each address in the URL; it is preferable for this per-address timeout to be shorter. In 18c and later, TRANSPORT_CONNECT_TIMEOUT was introduced for that purpose.
- The service name should be a configured application service, not a PDB or administration service.
- Specify LOAD_BALANCE=on per address list to balance the SCAN addresses.
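When putting a long-format URL together, it can be handy to sanity-check it outside of WebLogic before pasting it into a data source definition. The sketch below (host names, port, service name, and credentials are placeholders, and the Oracle Thin driver jar must be on the classpath) simply hands a descriptor string of the same shape to the driver through the standard DriverManager API; if the descriptor is malformed or the service is unreachable, the failure shows up immediately.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class UrlSmokeTest {
    public static void main(String[] args) throws Exception {
        // Placeholder host, port, service name, and credentials.
        String url = "jdbc:oracle:thin:@(DESCRIPTION="
                + "(CONNECT_TIMEOUT=90)(RETRY_COUNT=20)(RETRY_DELAY=3)(TRANSPORT_CONNECT_TIMEOUT=3)"
                + "(ADDRESS_LIST=(LOAD_BALANCE=on)(ADDRESS=(PROTOCOL=TCP)(HOST=primary-scan)(PORT=1521)))"
                + "(CONNECT_DATA=(SERVICE_NAME=myservice)))";
        try (Connection conn = DriverManager.getConnection(url, "myuser", "mypassword");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1 FROM DUAL")) {
            rs.next();
            System.out.println("Connected through the long-format URL");
        }
    }
}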


Active GridLink Configuration for Database Outages

This article discusses designing and deploying an Active GridLink (AGL) data source to handle database down times with an Oracle Database RAC environment.

AGL Configuration for Database Outages

It is assumed that an Active GridLink data source is configured as described in Using Active GridLink Data Sources, with the following:
- FAN enabled. FAN provides rapid notification about state changes for database services, instances, the databases themselves, and the nodes that form the cluster. It allows for draining of work during planned maintenance with no errors whatsoever returned to applications.
- Either auto-ONS or an explicit ONS configuration.
- A dynamic database service. Do not connect using the database service or PDB service; these are for administration only and are not supported for FAN.
- Testing connections. Depending on the outage, applications may receive stale connections when connections are borrowed before a down event is processed. This can occur, for example, on a clean instance shutdown when sockets are closed coincident with incoming connection requests. To prevent the application from receiving any errors, connection checks should be enabled at the connection pool. This requires setting test-connections-on-reserve to true and setting the test table name (the recommended value for Oracle is "SQL ISVALID").
- Optimized SCAN usage. As an optimization to force re-ordering of the SCAN IP addresses returned from DNS for a SCAN address, set LOAD_BALANCE=TRUE for the ADDRESS_LIST in the URL with database driver 12.1.0.2 and later. (Before 12.1.0.2, use the connection property oracle.jdbc.thinForceDNSLoadBalancing=true.)

Planned Outage Operations

For a planned downtime, the goals are to achieve:
- Transparent scheduled maintenance: make the scheduled maintenance process at the database servers transparent to applications.
- Session draining: when an instance is brought down for maintenance at the database server, draining ensures that all work using instances at that node completes and that idle sessions are removed. Sessions are drained without impacting in-flight work.

The goal is to manage scheduled maintenance with no application interruption while maintenance is underway at the database server. For maintenance purposes (e.g., software and hardware upgrades, repairs, changes, and migrations within and across systems), the services used are shut down gracefully, one or several at a time, without disrupting the operations and availability of the WLS applications. Upon a FAN DOWN event, AGL drains sessions away from the instance(s) targeted for maintenance. It is necessary to stop non-singleton services running on the target database instance (assuming that they are still available on the remaining running instances) or relocate singleton services from the target instance to another instance. Once the services have drained, the instance is stopped with no errors whatsoever to applications.

The following is a high-level overview of how planned maintenance occurs:
- Detect the DOWN event triggered by the DBA on the instances targeted for maintenance
- Drain sessions away from that instance (or those instances)
- Perform scheduled maintenance at the database servers
- Resume operations on the upgraded node(s)

Unlike Multi Data Source, where operations need to be coordinated on both the database server and the mid tier, Active GridLink cooperates with the database so that all of these operations are managed from the database server, simplifying the process.
The following steps are executed on the database server, with the corresponding reactions at the mid tier.

Step 1 - Stop the non-singleton service (without '-force') or relocate the singleton service. Omitting the -server option operates on all services on the instance.
Command:
$ srvctl stop service -db <db_name> -service <service_name> -instance <instance_name>
or
$ srvctl relocate service -db <db_name> -service <service_name> -oldinst <oldinst> -newinst <newinst>
Mid-tier reaction: The FAN planned DOWN (reason=USER) event for the service informs the connection pool that the service is no longer available for use and connections should be drained. Idle connections on the stopped service are released immediately. In-use connections are released when returned (logically closed) by the application. New connections are reserved on other instance(s) and databases offering the service. This FAN action invokes draining of the sessions from the instance without disrupting the application.

Step 2 - Disable the stopped service to ensure it is not automatically started again. Disabling the service is optional; this step is recommended for maintenance actions where the service must not restart automatically until the action has completed.
Command:
$ srvctl disable service -db <db_name> -service <service_name> -instance <instance_name>
Mid-tier reaction: No new connections are associated with the stopped/disabled service at the mid tier.

Step 3 - Allow sessions to drain. The amount of time depends on the application. There may be long-running queries, and batch programs may not be written to periodically return connections and get new ones; it is recommended that batch work be drained in advance of the maintenance.

Step 4 - Check for long-running sessions and terminate them using a transactional disconnect. Wait for the sessions to drain; you can run the query again to check whether any sessions remain.
Command:
SQL> select count(*) from ( select 1 from v$session where service_name in upper('<service_name>') union all select 1 from v$transaction where status = 'ACTIVE' );
SQL> exec dbms_service.disconnect_session('<service_name>', DBMS_SERVICE.POST_TRANSACTION);
Mid-tier reaction: The connection on the mid tier will get an error. If using Application Continuity, it's possible to hide the error from the application by automatically replaying the operations on a new connection on another instance. Otherwise, the application will get a SQLException.

Step 5 - Repeat the steps above for all services targeted for planned maintenance.

Step 6 - Stop the database instance using the immediate option.
Command:
$ srvctl stop instance -db <db_name> -instance <instance_name> -stopoption immediate
Mid-tier reaction: No impact on the mid tier until the database and service are restarted.

Step 7 - Optionally disable the instance so that it will not automatically start again during maintenance. This step is for maintenance operations where the services cannot resume during the maintenance.
Command:
$ srvctl disable instance -db <db_name> -instance <instance_name>

Step 8 - Perform the scheduled maintenance work (patches, repairs, and changes).

Step 9 - Enable and start the instance.
Command:
$ srvctl enable instance -db <db_name> -instance <instance_name>
$ srvctl start instance -db <db_name> -instance <instance_name>

Step 10 - Enable and start the service back. Check that the service is up and running.
Command:
$ srvctl enable service -db <db_name> -service <service_name> -instance <instance_name>
$ srvctl start service -db <db_name> -service <service_name> -instance <instance_name>
Mid-tier reaction: The FAN UP event for the service informs the connection pool that a new instance is available for use, allowing sessions to be created on this instance at the next request submission. Automatic rebalancing of sessions starts.

The following figure shows the distribution of connections for a service across two RAC instances before and after planned downtime. Notice that the connection workload moves from fifty-fifty across both instances to hundred-zero. In other words, RAC_INST_1 can be taken down for maintenance without any impact on the business operation.

Unplanned Outages

The configuration is the same for planned and unplanned outages, but there are several differences in what happens when an unplanned outage occurs. A component at the database server may fail, making all services unavailable on the instances running at that node. There is no stop or disable on the services because they have failed. The FAN unplanned DOWN event (reason=FAILURE) is delivered to the mid tier. For an unplanned event, all sessions are closed immediately, preventing the application from hanging on TCP/IP timeouts. Existing connections on other instances remain usable, and new connections are opened to these instances as needed. There is no graceful draining of connections. For applications using services that are configured to use Application Continuity, active sessions are restored on a surviving instance and recovered by replaying the operations, masking the outage from applications. If not protected by Application Continuity, any sessions in active communication with the instance will receive a SQLException.
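Draining works best when the application returns connections to the pool promptly, since in-use connections are only drained (and re-routed to surviving instances) when they are logically closed. A simple pattern that cooperates well with planned maintenance is to borrow a connection per unit of work and close it as soon as that unit completes, as in the following sketch; the data source JNDI name and the table are placeholders.

import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class DrainFriendlyWorker {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        // Placeholder JNDI name for the Active GridLink data source.
        DataSource ds = (DataSource) ctx.lookup("jdbc/aglDS");

        for (int i = 0; i < 100; i++) {
            // Borrow, do one unit of work, and return the connection promptly so
            // the pool can drain it during planned maintenance if needed.
            try (Connection conn = ds.getConnection();
                 PreparedStatement ps = conn.prepareStatement(
                         "UPDATE orders SET status = ? WHERE id = ?")) {
                ps.setString(1, "PROCESSED");
                ps.setInt(2, i);
                ps.executeUpdate();
            }
        }
    }
}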


Jakarta EE 9 is Released

As announced by the Eclipse Foundation and at JakartaOne Livestream, Jakarta EE 9 has been released. Jakarta EE 9 is a new version of the Jakarta EE platform and component specifications. Jakarta EE 9 transforms the use of the javax.* namespace to the jakarta.* namespace in order to enable future innovation of the specifications. This change has been implemented in a well-defined and consistent manner that will facilitate adoption and usage by compatible implementations, tools, and applications. We have the first Jakarta EE 9 compatible implementation in Eclipse GlassFish 6.0 RC2 available today, with delivery of additional implementations expected soon. The great community turnout for the Livestream announcement event is a testament to the momentum we have created.

Jakarta EE 9 is the result of an open, community-driven, vendor-neutral process. Whereas in Java EE releases, and in the initial Jakarta EE 8 release, Oracle played the leading role in driving the specifications, the GlassFish implementation, and the TCK technologies, in Jakarta EE 9 these roles were led by Kevin Sutter (Platform Spec), Steve Millidge (GlassFish), and Scott Marlow (TCK), with significant contributions by the broader Jakarta EE community. See this JakartaOne Livestream presentation that celebrates leading contributors and committers. A big thank you to all.

Although Oracle supported others taking on leadership roles in Jakarta EE 9, we continue to invest heavily in the platform and technologies. I'd like to personally thank Ed Bratt, Dmitry Kornilov and the entire Oracle Jakarta EE team for their extensive contributions. Names of Oracle leads on critical specification and implementation projects are listed below:

Eclipse Project for JSON Processing: Dmitry Kornilov (Oracle), Nathan Mittlestat (IBM)
Eclipse Project for JSON-B: Dmitry Kornilov
Eclipse Yasson: Dmitry Kornilov
Eclipse Project for JAF: Lukas Jungmann
Eclipse Project for JavaMail: Lukas Jungmann
Eclipse Project for JAX-WS: Lukas Jungmann
Eclipse Project for JAXB: Lukas Jungmann
Eclipse Project for JPA: Lukas Jungmann
Eclipse Implementation of JAXB: Lukas Jungmann
Eclipse Metro: Lukas Jungmann
Eclipse Jersey: Jan Supol
Eclipse Tyrus: Jan Supol
EclipseLink: Lukas Jungmann (Oracle), Joe Grassel (IBM)
Eclipse Project for JAX-RS: Santiago Pericas-Geertsen
Eclipse Grizzly: Anand Francis Joseph

Many others from Oracle have contributed to the above projects, and to the transformation of the Jakarta EE TCK that will be used to certify compatible implementations across all specifications. Thanks to all members of the Oracle Jakarta EE team.

Finally, I'd like to add a note of gratitude to Bill Shannon, who passed away earlier this year. Bill's contributions to Java EE and Jakarta EE stand alone; it was an honor to work with him. I highly recommend watching Ed Bratt's tribute to Bill, delivered at JakartaOne Livestream. It's more than just the technology. Thank you Bill.



Using Oracle JDBC Type Interfaces

One of the hot new features in the Oracle database, first introduced in 12c, is Application Continuity (AC). The feature basically detects that a connection has gone bad and substitutes a new one under the covers (I'll talk about it more in another article). To be able to do that, the application is given a connection wrapper instead of the real connection. Wrappers or dynamic proxies can only be generated for classes based on interfaces. The Oracle types, like REF and ARRAY, were originally introduced as concrete classes. There are new interfaces for the Oracle types that you will need to use to take advantage of this new AC feature (introduced in WLS 10.3.6 and/or the JDBC 11.2.0.3 driver).

First, some history so that you understand the needed API changes. In the early days of the WebLogic data source, any references to vendor proprietary methods were handled by hard-coded references. Keeping up with adding and removing methods was a significant maintenance problem. At the peak of the insanity, we had over one thousand lines of code that referenced Oracle-proprietary methods, and the server could not run without an Oracle jar in the classpath (even for DB2-only shops). In release 8.1 in March 2003, we introduced wrappers for all JDBC objects, such that we dynamically generated proxies that implement all public interface methods of, and delegate to, the underlying vendor object. The hard-coded references and the maintenance nightmare went away, and just as importantly, we could provide debugging information, find leaked connections, automatically close objects when the containing object closed, replace connections that fail testing, etc. The Oracle types were concrete classes, so proxies were generated for these classes implementing the WLS vendor interfaces weblogic.jdbc.vendor.oracle.*. Applications can cast to the WLS vendor interfaces or use getVendorObj to access the underlying driver object. Later, we added an option to unwrap data types, with a corresponding loss of functionality, like no debug information. Although the focus of this article is Oracle types, the dynamic proxies work for any vendor. For example, you can cast a DB2 connection to use a proprietary method: ((com.ibm.db2.jcc.DB2Connection)conn).setDB2ClientUser("myname").

Starting with Oracle driver 11.2.0.3, the database team needed wrappers for the new AC feature and introduced new interfaces. For WebLogic data source users, that's good news: no more unwrapping, the weblogic.jdbc.vendor package is no longer needed, and it's all transparent. Before you go and change your programs to use the new Oracle proprietary interfaces, the recommended approach is to first see if you can just use standard JDBC API's. In fact, as part of defining the new interfaces, Oracle proprietary methods were dropped if there was an equivalent standard JDBC API or the method was not considered to add significant value. This table defines the mapping. The goal is to get rid of references to the first and second columns and replace them with the third column.

Old Oracle type -> Deprecated WLS interface -> New interface
oracle.sql.ARRAY -> weblogic.jdbc.vendor.oracle.OracleArray -> oracle.jdbc.OracleArray
oracle.sql.STRUCT -> weblogic.jdbc.vendor.oracle.OracleStruct -> oracle.jdbc.OracleStruct
oracle.sql.CLOB -> weblogic.jdbc.vendor.oracle.OracleThinClob -> oracle.jdbc.OracleClob
oracle.sql.BLOB -> weblogic.jdbc.vendor.oracle.OracleThinBlob -> oracle.jdbc.OracleBlob
oracle.sql.REF -> weblogic.jdbc.vendor.oracle.OracleRef -> oracle.jdbc.OracleRef

This is a job for a shell hacker! Much of it can be automated, and the compiler can tell you if you are referencing a method that has gone away; then check whether the missing method is in the equivalent java.sql interface (e.g., getARRAY() becomes the JDBC standard getArray()). You can take a look at a sample program that I wrote to demonstrate all of these new interfaces at oracletypes.txt (note that this is actually a ".java" program, so rename ".txt" to ".java"). It covers programming with all of these Oracle types. While use of Blob and Clob might be popular, Ref and Struct might not be used as much. The sample program shows how to create, insert, update, and access each type using both standard and extension methods. Note that you need to use the Oracle proprietary createOracleArray() instead of the standard createArrayOf(). Although the sample program doesn't use the standard createBlob() or createClob(), these are supported for the Oracle driver. The API's can be reviewed in the Javadoc in the oracle.jdbc API Reference.

This is a first step toward using Application Continuity. But it's also a good move to remove Oracle API's that will eventually go away and to use standard JDBC interfaces and the new Oracle interfaces.

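As a small illustration of the shift from the old concrete classes to the new interfaces, the sketch below creates an Oracle collection with the proprietary createOracleArray() and then binds it through the standard java.sql API. It is only a sketch: the JNDI name, the user-defined collection type NUM_LIST, and the table are placeholders that would have to exist in your environment.

import java.sql.Array;
import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.naming.InitialContext;
import javax.sql.DataSource;
import oracle.jdbc.OracleConnection;

public class OracleArrayExample {
    public static void main(String[] args) throws Exception {
        // Placeholder JNDI name.
        DataSource ds = (DataSource) new InitialContext().lookup("jdbc/myDS");
        try (Connection conn = ds.getConnection()) {
            // Use the standard Wrapper API to reach the new oracle.jdbc interface
            // instead of casting to weblogic.jdbc.vendor.oracle.* or oracle.sql classes.
            OracleConnection oconn = conn.unwrap(OracleConnection.class);

            // Proprietary creation call (the standard createArrayOf() is not used here).
            Array values = oconn.createOracleArray("NUM_LIST", new Object[] {1, 2, 3});

            try (PreparedStatement ps =
                         conn.prepareStatement("INSERT INTO t_arrays (data) VALUES (?)")) {
                ps.setArray(1, values); // standard JDBC from here on
                ps.executeUpdate();
            }
        }
    }
}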


ATP Database use with WebLogic Server

This blog describes the use of Oracle's Autonomous Transaction Processing (ATP) service with a WebLogic Server (WLS) datasource. There is documentation available from various sources that partially covers this; this article tries to pull it all together in one place for WLS and to cover solutions for difficulties seen by our customers. This document is focused on what is more specifically called ATP-S (as opposed to ATP-D, the RAC-based version; most of the information in this article is applicable to both).

Introduction:

See Oracle Cloud Autonomous Transaction Processing for more information on ATP. The blog Creating an Autonomous Transaction Processing (ATP) Database has screen shots for creating a new ATP database from the Cloud console, during which you specify the passwords for the database ADMIN user and download the client credentials wallet. This blog assumes you have already completed that process and downloaded the wallet zip file. The creation blog assumes /tmp/demoatp is used as the wallet directory, and this blog uses the same. The use of these files will be described further below in relation to datasource configuration. The only information that you need is the alias name from the tnsnames.ora file. For WLS, you should use the alias name of the form "name_tp"; this service is configured correctly for WLS transaction processing.

This blog article has functional scripts for creating the datasource using either online WLST or REST, and also has screen shots for creating the datasource via the administration console. Before creating the datasource, we need to check a few prerequisites. WebLogic Server releases 12.2.1.4.0 and 14.1.1.0.0 shipped with the 19.3 driver and support JDK8, so they work with ATP out of the box. It is recommended that you use one of these releases or later for the simplest platform to get started. The following sections detail the requirements for running with ATP.

Certification

WebLogic Server 12.2.1.3.0 and later are certified with ATP. The following sections have some information about using earlier WLS versions, but those versions are not currently certified or supported.

JDK Prerequisite:

WebLogic Server 12.1.3 and later support JDK 8. Look at the update number for the JDK by running `java -version` and checking to see if it is 1.8.0_169 or later. If you haven't been keeping up with quarterly JDK 8 CPU's (shame on you), you have the option of either catching up to at least update 169 or later (this is highly recommended), or you need to download the JCE Unlimited Strength Jurisdiction Policy Files 8. See the associated README file for installation notes. Without this, you will get a 'fatal alert: handshake_failure' when trying to connect to the database. WebLogic Server 10.3.6 through 12.1.2 run on JDK7 (JDK 8 is not supported). If running on JDK7, you need to download and install the JCE Unlimited Strength Jurisdiction Policy Files 7.

JDBC Driver Prerequisite:

WLS 12.2.1.3.0 shipped with the 12.2.0.1 Oracle driver. There are no special requirements for using this driver and the attached scripts should work with no changes. That makes configuration (and certification) simple. You can skip to the next section. WLS versions 12.1.3 through 12.2.1.2.0 shipped with the 12.1.0.2 Oracle driver. WLS versions 10.3.6 through 12.1.2 shipped with the 11.2.0.3 driver. The 11.2.0.3 driver works with the 18c and 19c database servers.
It is possible to upgrade the 11.2.0.3 driver to the 12.1.0.2 driver using the information at this link (driver upgrades are only supported for WLS, not JRF or FA). The 11.2.0.3 and 12.1.0.2 drivers need a patch to wlserver_10.3/server/lib/ojdbc7.jar to support TLSv1.2. See this link to download the jar file or apply a patch for bug 23176395. Refer to MOS note 2122800.1 for more details.

When using a pre-12.2.x driver, the use of driver connection properties for SSL configuration, as shown in the attached scripts, is not supported (you don't need to remove them from the scripts; they will be ignored). You must instead use command-line system properties as documented at this link. For example, set -Doracle.net.tns_admin=<absolute path to client credentials>, -Doracle.net.wallet_location=file://<absolute path to client credentials>, and -Doracle.net.ssl_version=1.2 as JVM args.

For FMW/FA running on 11g, ojdbc7.jar lives at wlserver_10.3/server/lib/ojdbc7.jar and ojdbc7dms.jar lives at modules/oracle.jdbc_11.1.1/ojdbc7dms.jar. It is also necessary to copy an updated 18c/19c version of oraclepki.jar, osdt_core.jar, and osdt_cert.jar to the oracle_common/modules directory (for example, the 18.3 driver jar files can be downloaded from https://www.oracle.com/database/technologies/appdev/jdbc-ucp-183-downloads.html). Also, add the security provider oracle.security.pki.OraclePKIProvider to jdk1.7/jre/lib/security/java.security in position #3 (this allows for using SSO or PKCS12 wallets).

HTTP Proxy Support:

You may need to update to a later version of the Oracle driver jar files (18c or later) if you want some newer features, such as HTTP proxy configuration. Note that this requires the environment to be running JDK8. If the client is behind a firewall and your network configuration requires an HTTP proxy to connect to the internet, you have two options. You can convince your network administrator to open outbound connections to hosts in the oraclecloud.com domain using port 1522 without going through an HTTP proxy. The other option, if running on JDK8, is to upgrade to the Oracle 18c or later JDBC Thin Client, which enables connections through HTTP proxies. See the blog Oracle 18.3 Database Support with WebLogic Server for instructions on how to get the jar files and update your CLASSPATH/PRE_CLASSPATH. In addition, you will need to update the _tp service entry in the tnsnames.ora file to change "address=" to "address=(https_proxy=proxyhostname)(https_proxy_port=80)". Failure to do this will cause connections to the database to hang or not find the host.

Configuration using WLST or REST:

Now that you have the necessary credential files, JDK, and driver jar files, you are ready to create the datasource. The online WLST script is attached at this link to online-WLST script (rename it online.py). To run the script, assuming that the server is started with the correct JDK and driver jar files, just run:

java weblogic.WLST online.py

The REST script is attached at this link to REST script (rename it rest.sh). To run the script, assuming that the server is started, just run the following from the domain home directory:

sh ./rest.sh

Both scripts create the same datasource descriptor file and deploy the datasource to a server. Let's look at the scripts to see how the datasource is configured. In each of these scripts, the variables that you need to set are at the top, so you can update the script quickly and not touch the logic.
WLST uses python variables and the REST script uses shell variables. The alias name (serviceName variable) of the form "name_tp" is taken from the tnsnames.ora file. The URL is generated by using an @alias format, "jdbc:oracle:thin:@name_tp". For this to work, we also need to provide the directory where the tnsnames.ora file is located (the "tns_admin" variable) using the "oracle.net.tns_admin" driver property. Note that the URL information in tnsnames.ora uses the long format so that the protocol can alternatively be specified as TCPS. The datasource name (variable "dsname") is also used to generate the JNDI name by prefixing it with "jndi." in the example. You can change it to match your application requirements. The recommended test table name is "SQL ISVALID" for optimal testing and performance. You can set other connection pool parameters based on the standards for your organization.

ATP-S provides access to a single Pluggable Database (PDB) in a Container Database (CDB). (ATP-D provides an entire CDB and one or more PDBs.) Most of the operations on the PDB are similar to a normal Oracle database. For ATP-S, you have a user called ADMIN that does not have the SYSDBA role but does have some administrative permissions to do things like creating schema objects and granting permissions. The number of sessions is configured at 100 * ATP cores. You cannot create a tablespace; the default available tablespace is named DATA and the temporary tablespace is named TEMP. The block size is fixed at 8k, meaning you cannot create indexes over approximately 6k in size. Some additional information about restrictions is provided at Autonomous Transaction Processing for Experienced Oracle Database Users.

ATP-S is configured without GRID or RAC installed. That means that FAN is not supported and only WLS GENERIC datasources can be created (Multi Data Source and Active GridLink cannot be used). The driver may try to get FAN events from the ONS server and fail because it isn't configured. To avoid this, we explicitly set the driver property oracle.jdbc.fanEnabled=false. This property is no longer needed if using the 18c or later driver.

To create connections, we need to provide a user and password (variable names "user" and "password") for the datasource. The example uses the ADMIN user configured when the database was created. More likely, you will create additional users for application use. You can use your favorite tool like SQL*Plus to create the schema objects, or you can use WLS utils.Schema to create the objects; the associated SQL DDL statements are described at this link in the user guide. The password will be encrypted in the datasource descriptor.

The remainder of the configuration is focused on setting up two-way SSL between the client and the database. There are two options for configuring this, and the credentials for both are available in the wallet zip file. For either option, we set the two driver properties:

oracle.net.ssl_server_dn_match=true
oracle.net.ssl_version=1.2 (this should be required only for the 12.x driver)

The first option is to use the Oracle auto-open SSO wallet cwallet.sso. This use of the wallet is to provide the information for a two-way SSL connection to the database. It should not be confused with using the wallet to contain the database user/password credentials so they can be removed from the datasource descriptor (described at the wallet blog).
When using this option, the only driver property that needs to be set is oracle.net.wallet_location (variable wallet_location), pointing to the directory where the wallet is located. The second option is to use the Java KeyStore (JKS) files truststore.jks and keystore.jks. For this option, we need to set the driver properties javax.net.ssl.keyStoreType, javax.net.ssl.trustStoreType, javax.net.ssl.trustStore, javax.net.ssl.trustStorePassword, javax.net.ssl.keyStore, and javax.net.ssl.keyStorePassword. We also want to make sure that the password values are stored as encrypted strings. That wraps up the discussion of datasource configuration using scripts. If you want to enable Application Continuity on the database service, you need to run a database procedure as documented at this link in the user guide and use the associated replay driver.

Configuration using the Administration Console:

This section has screen shots of using the administration console to do the same configuration. The parameters are set as described in the prior section. From the Home screen, select "Data Sources" under "Services" in the middle of the screen. Use the "New" drop-down to select "Generic Data Source". On the next screen, fill in the "JNDI Name" and click "Next". On the next screen, use the default "Database Type" of Oracle and the default "Database Driver" for the Oracle Thin Driver Service connection (you could optionally pick the XA version of the driver or the Application Continuity version of the driver). Click "Next". On the next screen, you can leave the defaults (if you pick the XA driver, then the screen will look different, with no options to select). Click "Next". On the next screen, for the "Database Name", enter the name of your service (of the form "name_tp"). The Host Name is not needed, but the console requires that something is entered, so for this example I will enter "garbage". Leave the port; it won't be used either. Enter the user name and password that you created on your database. Click "Next".

The next screen is where most of the work is. Modify the "URL" to remove "//garbage:1521", leaving a URL with just the alias that points into the tnsnames.ora file. Now you need to enter the "Properties", as discussed earlier: oracle.net.tns_admin, user, oracle.net.wallet_location, oracle.jdbc.fanEnabled, oracle.net.ssl_version, and oracle.net.ssl_server_dn_match. Alternatively, you can use the parameters for a JKS file. When using the console, JKS passwords should be encrypted using the process described at this encrypted properties blog. You can click on the "Test Configuration" button to see that you typed everything in correctly. Then click "Next". On the next screen, select the check boxes for the desired targets and click "Finish" to create and target the data source.

That will bring you back to the summary of JDBC Data Sources. Click on the link for the data source that you just created. Click on the "Monitoring" tab and then the "Testing" tab under Monitoring. Select the button for the WLS server that you want to test and click on "Test Data Source". The screen should show that the test is successful.

Known Problems

There are several problems that you may run into when using ATP that you won't see on a non-ATP database. Maybe some of these will save you some time.

1. Application Continuity doesn't work with the 19.3 Oracle driver against a 12.2.0.1 Oracle database server. This is fixed in the 19.6 driver.

2. The object oracle.sql.BLOB is not serializable in the 19.3 Oracle driver.
The following sequence fails:

Blob blob = BLOB.getEmptyBLOB();
ObjectOutputStream oos = new ObjectOutputStream(new ByteArrayOutputStream());
oos.writeObject(blob);

We saw this problem in the WebLogic job scheduler, which uses Blob values. We were able to work around it by using Connection.createBlob() instead of oracle.sql.BLOB.empty_lob() (the workaround is included in WebLogic 14.1.1.0.0 but not in WebLogic 12.2.1.4.0). This has also been fixed in the 19.6 Oracle driver.

3. There is a limit of 100 sessions per core, and it's tricky to figure out that this is the cause of the failure "no more data to read from socket" when the maximum number of sessions is reached.

4. When using the "_high" service instead of the "_tp" service, we saw some very long elapsed times in the AWR report. Stick with the "_tp" service as documented above.

5. ATP-S has a listener rate limit of 100 connections per second to throttle connection requests (since ATP-S is a shared resource). This would generally only be a problem in WebLogic Server when trying to initialize a datasource with a minimum count greater than 100 connections, so one way to get around this is to set the minimum count to a value less than 100. In WebLogic Server 12.2.1.4.0, we added driver connection properties to limit the maximum number of threads that can create connections: weblogic.jdbc.maxConcurrentCreateRequests and weblogic.jdbc.concurrentCreateRequestsTimeoutSeconds (see the documentation for more information).
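For reference, here is a minimal sketch of the workaround pattern mentioned in item 2 above: instead of building an empty LOB through the oracle.sql.BLOB class, the Blob is obtained from the standard java.sql.Connection factory method, which yields an object that can be passed to code that does not expect driver-specific classes. The JNDI name and table are placeholders.

import java.sql.Blob;
import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class BlobWorkaround {
    public static void main(String[] args) throws Exception {
        // Placeholder JNDI name for the ATP datasource.
        DataSource ds = (DataSource) new InitialContext().lookup("jdbc/atpDS");
        try (Connection conn = ds.getConnection()) {
            // Workaround: use the standard factory method instead of oracle.sql.BLOB.getEmptyBLOB().
            Blob blob = conn.createBlob();
            blob.setBytes(1, "payload".getBytes());

            try (PreparedStatement ps =
                         conn.prepareStatement("INSERT INTO blob_table (data) VALUES (?)")) {
                ps.setBlob(1, blob);
                ps.executeUpdate();
            }
            blob.free();
        }
    }
}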


Monitoring FAN Events

fanWatcher is a sample program to print Oracle Notification Service (ONS) Fast Application Notification (FAN) event information. These events provide information regarding load balancing, and service and instance up and down events. This information is automatically processed by WebLogic Server Active GridLink and by UCP on the mid tier. For more information about FAN events, see this link. The program described here is an enhancement of the earlier program described in that white paper. This program can be modified to work as desired to monitor events and help diagnose problems with configuration. The code is available at this link.

To run this Java application, you need to be set up to run a JDK and you need ons.jar and ojdbcN.jar in the CLASSPATH. The CLASSPATH is set differently depending on whether you are running on the database server or on the mid tier with WebLogic Server or UCP. Make sure to use the correct path separator for CLASSPATH on your platform (';' for Windows, ':' otherwise). The general format for the command line is:

java fanWatcher config_type [eventtype ...]

Event Type Subscription

The event type sets up the subscriber to only return limited events. You can run without specifying the event type to see what types of events are returned. When you specify an event name on the command line, the program sets up the subscriber to do a simple match on the event. If the specified pattern occurs anywhere in a notification's header, then the comparison statement evaluates true. The most basic pattern match is an empty string (not null), which matches all notifications. The pattern is enclosed in double quotes (required) and prefixed with "%" to be case insensitive.

Event processing is more complete than shown in this sample. The subscription string is generally composed of one or more comparison statements, each logically related to another with the boolean operators '|' for an OR relationship or '&' for an AND relationship. Parentheses are used to group these comparison statements, and the '!' operator placed before an opening parenthesis negates the evaluated value within. Each individual comparison statement must be enclosed within double quotes ('"'), and can take one of two basic forms: "pattern" or "name=value". A "pattern" is a simple string match of the notification header: if the specified "pattern" occurs anywhere in a notification's header, then the comparison statement evaluates true. The most basic pattern match is an empty string (not NULL), which matches all notifications. The "name=value" format looks up the named ONS notification header or property and compares its value against the specified value; if they match, then the comparison statement evaluates true. If the specified header or property name does not exist in the notification, the comparison statement evaluates false. A comparison statement is interpreted as case insensitive when a percent character ('%') is placed before the opening quote. Note that for "name=value" comparison statements, only the value is treated as case insensitive with this option: the name lookup will always be case sensitive. A comparison statement is interpreted as a regular expression when a dollar sign character ('$') is placed before the opening quote. Standard POSIX regular expressions are supported. To specify a regular expression that is also case insensitive, place the dollar sign and percent sign together and in that order ("$%") before the opening quote.
A special-case subscription string composed of only the exclamation point character ('!') signifies that the subscription will not match any notifications. You might want to modify the event to select on a specific service by using something like:

%"eventType=database/event/servicemetrics/<serviceName>"

Running with Database Server 10.2 or Later

This approach runs on the database server and connects directly to the local ONS daemon available in the Grid Infrastructure cluster. The fanWatcher utility must be run as a user that has privilege to access $CRS_HOME/opmn/conf/ons.config, which is used to start the ONS daemon and is read by this program. The configuration type on the command line is set to "crs".

# CRS_HOME should be set for your Grid Infrastructure
echo $CRS_HOME
CRS_HOME=/mypath/scratch/12.1.0/grid/
CLASSPATH="$CRS_HOME/jdbc/lib/ojdbc6.jar:$CRS_HOME/opmn/lib/ons.jar:."
export CLASSPATH
javac fanWatcher.java
java -Doracle.ons.oraclehome=$CRS_HOME fanWatcher crs

Running with WLS 10.3.6 or Later Using an Explicit Node List

There are two ways to run in a client environment: with an explicit node list, or using auto-ONS. It's necessary to have ojdbcN.jar and ons.jar, which are already available when the environment is configured for WLS. If you are set up to run with UCP directly, these should also be in your CLASSPATH. The first approach works with Oracle driver and database 11 and later (SCAN support came in later versions of Oracle, including the 11.2.0.3 jar files that shipped with WLS 10.3.6).

# Set the WLS environment using wlserver*/server/bin/setWLSEnv
CLASSPATH="$CLASSPATH:."   # add local directory for sample program
export CLASSPATH
javac fanWatcher.java
java fanWatcher "nodes=rac1:6200,rac2:6200" database/event/service

The node list is a string of one or more values of the form name=value, separated by a newline character (\n). There are two supported formats for the node list.

The first format is available for all versions of ONS. The following names may be specified.
- nodes - Required. The format is one or more host:port pairs separated by a comma.
- walletfile - Oracle wallet file used for SSL communication with the ONS server.
- walletpassword - Password to open the Oracle wallet file.

The second format is available starting in database 12.2.0.2. It supports more complicated topologies with multiple clusters and node lists. It has the following names.
- nodes.id - A list of nodes representing a unique topology of remote ONS servers. id specifies a unique identifier for the node list. Duplicate entries are ignored. The list of nodes configured in any list must not include any nodes configured in any other list for the same client, or duplicate notifications will be sent and delivered. The list format is a comma-separated list of ONS daemon listen address and port pairs separated by a colon.
- maxconnections.id - Specifies the maximum number of concurrent connections maintained with the ONS servers. id specifies the node list to which this parameter applies. The default is 3.
- active.id - If true, the list is active and connections are automatically established to the configured number of ONS servers. If false, the list is inactive and is only used as a failover list in the event that no connections for an active list can be established. An inactive list can only serve as a failover for one active list at a time, and once a single connection is re-established on the active list, the failover list reverts to being inactive. Note that only notifications published by the client after a list has failed over are sent to the failover list. id specifies the node list to which this parameter applies. The default is true.
- remotetimeout - The timeout period, in milliseconds, for a connection to each remote server. If the remote server has not responded within this timeout period, the connection is closed. The default is 30 seconds.

The walletfile and walletpassword may also be specified (note that there is one walletfile for all ONS servers). The nodes attribute cannot be combined with the name.id attributes.

Running with WLS Using Auto-ONS

Auto-ONS is available starting in Database 12.1.0.1; before that, no information is available. Auto-ONS only works with RAC configurations; it does not work with an Oracle Restart environment. Since the first version of WLS that ships with Database 12.1 jar files is WLS 12.1.3, this approach will only work with upgraded database jar files on versions of WLS earlier than 12.1.3. Auto-ONS works by getting a connection to the database to query the ONS information from the server. For this program to work, a user, password, and URL are required. For the sample program, the values are assumed to be in the environment (to avoid putting them on the command line). If you want, you can change the program to prompt for them or hard-code the values into the Java code.

# Set the WLS environment using wlserver*/server/bin/setWLSEnv
# Set the credentials in the environment. If you don't like doing this,
# hard-code them into the java program
password=mypassword
url='jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=\
(ADDRESS=(PROTOCOL=TCP)(HOST=rac1)(PORT=1521))\
(ADDRESS=(PROTOCOL=TCP)(HOST=rac2)(PORT=1521)))\
(CONNECT_DATA=(SERVICE_NAME=otrade)))'
user=myuser
export password url user
CLASSPATH="$CLASSPATH:."
export CLASSPATH
javac fanWatcher.java
java fanWatcher autoons

fanWatcher Output

The output looks like the following. You can modify the program to change the output as desired. In this short output capture, there is a metric event and an event caused by stopping the service on one of the instances.

** Event Header **
Notification Type: database/event/servicemetrics/otrade
Delivery Time: Fri Dec 04 20:08:10 EST 2015
Creation Time: Fri Dec 04 20:08:10 EST 2015
Generating Node: rac1
Event payload:
VERSION=1.0 database=dev service=otrade { {instance=inst2 percent=50 flag=UNKNOWN aff=FALSE}{instance=inst1 percent=50 flag=UNKNOWN aff=FALSE} } timestamp=2015-12-04 17:08:03

** Event Header **
Notification Type: database/event/service
Delivery Time: Fri Dec 04 20:08:20 EST 2015
Creation Time: Fri Dec 04 20:08:20 EST 2015
Generating Node: rac1
Event payload:
VERSION=1.0 event_type=SERVICEMEMBER service=otrade instance=inst2 database=dev db_domain= host=rac2 status=down reason=USER timestamp=2015-12-04 17:
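The event payloads shown above are largely space-separated name=value strings, so a small amount of parsing makes them easier to act on if you extend the sample program. The sketch below is a generic helper (not part of fanWatcher itself) that turns a flat payload like the SERVICEMEMBER event above into a map; the metric events with nested braces would need additional handling.

import java.util.LinkedHashMap;
import java.util.Map;

public class FanPayloadParser {
    // Parses a payload such as:
    // "VERSION=1.0 event_type=SERVICEMEMBER service=otrade instance=inst2 status=down reason=USER"
    public static Map<String, String> parse(String payload) {
        Map<String, String> result = new LinkedHashMap<>();
        for (String token : payload.trim().split("\\s+")) {
            int eq = token.indexOf('=');
            if (eq > 0) {
                result.put(token.substring(0, eq), token.substring(eq + 1));
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> event = parse(
                "VERSION=1.0 event_type=SERVICEMEMBER service=otrade instance=inst2 status=down reason=USER");
        if ("down".equals(event.get("status"))) {
            System.out.println("Service member down on instance " + event.get("instance"));
        }
    }
}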

fanWatcher is a sample program to print the Oracle Notification Service (ONS) Fast Application Notification (FAN) event information. These events provide information regarding load balancing, and...

Announcement

SOA Suite support on Kubernetes leverages WebLogic Kubernetes ToolKit

The Oracle SOA Suite team recently published a blog, Announcing Oracle SOA Suite on Containers & Kubernetes for Production Workloads. From a technical standpoint, they have leveraged the WebLogic Server Operator and other tools in the WebLogic Kubernetes ToolKit to deliver this support. The blog above provides more detail, and we encourage you to take some time to review it and try it out. We have worked closely with the SOA Suite team, and support their offering, which is fully aligned with our strategy. We thought it would be useful to provide some background on this, from the perspective of the Oracle WebLogic Server and Oracle Enterprise Cloud Native Java team.
Oracle's goal in delivering WebLogic Server Kubernetes support is to meet customer requirements to evolve their WebLogic Server applications, at the pace they choose, to Kubernetes and cloud native infrastructure, where they can leverage the benefits of running on Kubernetes. We are happy with the capabilities the WebLogic Kubernetes ToolKit provides for our customers, and intend to continue to enhance these tools over time. In addition, Oracle is addressing other, related requirements we're hearing from customers as they adopt WebLogic Server on Kubernetes. These requirements, and the support we've delivered, often extend beyond core WebLogic Server features and tools. For example:
Coherence Kubernetes support and the Coherence Kubernetes Operator, enabling use of Coherence, as well as WebLogic Server, on Kubernetes
Ongoing enhancements to Helidon for building Java microservices
Including Oracle Support for Helidon in Oracle Support for WebLogic Server and Coherence
Open sourcing Coherence Community Edition to broaden community adoption of Coherence in polyglot microservices architectures
Support for WebLogic Server, Coherence and Helidon on GraalVM, leveraging GraalVM's unique features
The Verrazzano open source project for managing hybrid applications
Refinement of Oracle licensing policies on Kubernetes
Delivery of WebLogic Server for OKE, based on the WebLogic Kubernetes ToolKit, to simplify deploying WebLogic Server applications on Kubernetes in Oracle Cloud
The SOA Suite announcement is another example of improvements delivered in products related to WebLogic Server. We have been working with multiple Oracle product teams who use WebLogic Server in their products, and who want to make their products available on Kubernetes. SOA Suite is an important example. Many WebLogic Server customers are also Oracle SOA Suite, Oracle Service Bus, and Oracle Enterprise Scheduler customers, and use these products for integrations across their enterprise applications, including integrations with custom WebLogic Server applications. As these customers evaluate moving WebLogic Server applications to Kubernetes, they are evaluating moving SOA Suite deployments to Kubernetes as well. SOA Suite support on Kubernetes gives these customers choices, enabling them to migrate their environments at the pace they choose, as makes sense for them.
There is more to come. The Oracle Fusion Middleware product line is largely built on WebLogic Server, and other product teams are working on similar support and certifications. We expect to offer additional Fusion Middleware product support and customer migration options over time. In addition, we're seeing adoption and support of Kubernetes by some of the Oracle Global Business Unit Application product teams.
We feel good about the use of our technology, and even better about the growing set of choices and options we're giving to Oracle customers - like you! Stay posted for more to come.


Announcement

Use your Universal Credits with Oracle WebLogic Server for OKE

Overview
We’re excited to announce the availability of new consumption-based pricing options for Oracle WebLogic Server for Oracle Cloud Infrastructure Container Engine for Kubernetes (Oracle WebLogic Server for OKE). See my earlier blog post describing Oracle WebLogic Server for OKE and the functionality provided. We have created new Oracle WebLogic Server for OKE listings in the Oracle Cloud Marketplace that enable you to quickly provision an Oracle WebLogic Server domain on Kubernetes, and to use preconfigured CI/CD pipelines to develop and deploy WebLogic Server applications using Oracle’s Universal Credits model (UCM). The listings we are announcing today are the following:
Oracle WebLogic Server Enterprise Edition for OKE (UCM): Includes clustering for high availability, and Oracle Java SE Advanced (Java Mission Control and Java Flight Recorder) for diagnosing problems. Create a JRF-enabled domain if you want to build applications with Oracle Application Development Framework (ADF).
Oracle WebLogic Suite for OKE (UCM): Includes all of Oracle WebLogic Server Enterprise Edition for OKE plus: Oracle Coherence for increased performance and scalability, Active GridLink for RAC for advanced database connectivity, and all of Internet Application Server Enterprise Edition.
Quickly create an Oracle WebLogic Server configuration on OKE
You can easily find all the Oracle WebLogic Server listings available in the Oracle Marketplace by typing the word “WebLogic” in the search bar. To find only the Oracle WebLogic Server for OKE listings, you can enter “oke” into the search bar. After selecting the listing that fits your needs, and after accepting the terms and conditions, all you need to do is provide some parameters and within minutes your instance will be ready to go.
The Oracle WebLogic Server for OKE components
Oracle WebLogic Server for OKE will create or configure the resources required for a secure environment that follows best practices for running WebLogic Server on OKE clusters. It uses the WebLogic Kubernetes Toolkit to support WebLogic on Kubernetes. It provides, out of the box, a Jenkins controller configured to run inside your OKE cluster and launch Jenkins agents on demand to build and test your domain. NginX is configured as an ingress controller and is front-ended by a public load balancer to access deployed applications, and by a private load balancer to access the Oracle WebLogic Server and Jenkins administrative consoles. A bastion host is provisioned in a public subnet to enable access to private resources, such as an administrative host that is created with all the required tools for you to manage your instance configuration. A File Storage Service is created and mounted across the different components so you can easily create component configurations that require a persistent volume.
Now let’s talk about the node pools. Oracle WebLogic Server for OKE creates two node pools in the OKE cluster. The first, the WebLogic node pool, is configured to run pods that use images containing WebLogic binaries. The second, the non-WebLogic node pool, is configured to run pods that use images that do not contain WebLogic binaries. Separate node pools are created so that you will be charged for the entitlement to use WebLogic software only on the OCPUs associated with the WebLogic node pool. You will not be charged for the entitlement to use WebLogic software on the non-WebLogic node pool.
Here are the pods that are configured to run in each node pool after provisioning is completed:   Getting Started To get started you will need an Oracle Cloud account.  If you do not already have an Oracle Cloud Account, go here to create a new Free Tier account. See the tutorial Get Started with Oracle WebLogic Server for OKE. For more information on Oracle WebLogic Server for OKE, review our product documentation. Then go to the Oracle Cloud Marketplace and test out the new way to run your Oracle WebLogic Server applications in Oracle Cloud.


Announcement

WebLogic on Azure Virtual Machines Major Release Now Available

We are delighted to announce the availability of a major release for solutions to run Oracle WebLogic Server (WLS) on Azure Linux virtual machines. The release is jointly developed with the WebLogic team as part of the broad-ranging partnership between Microsoft and Oracle. The partnership also covers a range of Oracle software running on Azure, including Oracle Linux and Oracle Database, as well as interoperability between Oracle Cloud Infrastructure (OCI) and Azure. This major release covers various common WLS on Azure use cases such as base image, single working instance, clustering, load balancing via App Gateway, database connectivity, and integration with Azure Active Directory. WLS is a key component in enabling enterprise Java workloads on Azure. Customers are encouraged to evaluate these solutions for full production usage and reach out to collaborate on migration cases.
Use Cases and Roadmap
The partnership between Oracle and Microsoft was announced in June of 2019. Under the partnership, we announced the initial release of the WLS on Azure Linux virtual machines solutions at Oracle Open World. The solutions facilitate easy lift-and-shift migration by automating boilerplate operations such as provisioning virtual networks/storage, installing Linux/Java resources, and setting up WLS, as well as configuring security with a network security group. The initial release supported a basic set of use cases such as single working instance and clustering. In addition, the release supported a limited set of WLS and Java versions. This release expands the options for operating system, Oracle JDK, and WLS combinations. The release also automates common Azure service integrations for load balancing, databases, and security. A subsequent release by the end of calendar year 2020 will deliver distributed logging via Elastic Stack as well as distributed caching via Oracle Coherence. Oracle and Microsoft are also working on enabling similar capabilities on the Azure Kubernetes Service (AKS) using the WebLogic Kubernetes Operator.
Solution Details
There are four offers available to meet different scenarios:
Single Node: This offer provisions a single virtual machine and installs WLS on it. It does not create a domain or start the Administration Server. This is useful for scenarios with highly customized domain configuration.
Admin Server: This offer provisions a single virtual machine and installs WLS on it. It creates a domain and starts up the Administration Server, which allows you to manage the domain.
Cluster: This offer creates an n-node highly available cluster of WLS virtual machines. The Administration Server and all managed servers are started by default, which allows you to manage the domain.
Dynamic Cluster: This offer creates a highly available and scalable dynamic cluster of WLS virtual machines. The Administration Server and all managed servers are started by default, which allows you to manage the domain.
The solutions enable a variety of robust, production-ready deployment architectures with relative ease, automating the provisioning of the most critical components quickly and allowing customers to focus on adding business value. These offers are Bring-Your-Own-License. They assume you have already procured the appropriate licenses with Oracle and are properly licensed to run the offers in Azure.
The offers enable both Java EE 7 and Java EE 8, letting you choose from a variety of base images including WebLogic 12.2.1.4.0 and WebLogic 12.2.1.3.0 with JDK8u131/251 and Oracle Linux 7.4/7.6 or WebLogic 14.1.1 with JDK11u01 on Oracle Linux 7.6. All base images are also available on Azure on their own. The standalone base images are suitable for customers that require very highly customized Azure deployments. Summary Customers interested in WLS on Azure virtual machines should explore the solutions, provide feedback and stay informed of the roadmap, including upcoming WLS enablement on AKS. Customers can also take advantage of hands-on help from the engineering team behind these offers. The opportunity to collaborate on a migration scenario is completely free while the offers are under active initial development.    


Announcement

Oracle WebLogic Server for OKE Now Available on Oracle Cloud Marketplace

We’re excited to announce that Oracle WebLogic Server for Oracle Cloud Infrastructure Container Engine for Kubernetes (WebLogic Server for OKE) is now available through the Oracle Cloud Marketplace. Configuring Oracle WebLogic Server on OKE has never been so easy and fast. Within minutes you can generate an Oracle WebLogic Server configuration on OKE, and on top of that you will get a Jenkins controller configured for OKE and CI/CD pipeline jobs to support Oracle WebLogic Server life cycle management operations.
The Oracle WebLogic Server for OKE topology
In each configuration you create, you will get:
An OKE cluster deployed in a private subnet with two node pools.
A File Storage Service that is shared across pods.
An administrative host deployed in a private subnet to easily access the following: the OKE cluster, the logs of the Oracle WebLogic Server domain, the Jenkins home configuration, helper scripts to manage your domain, and the shared file system.
A bastion host deployed in a public subnet to access the resources deployed in the private subnet.
An internal load balancer to access the Jenkins console and the Oracle WebLogic Server Administration Console.
An external load balancer to access the Oracle WebLogic Server cluster.
The OKE deployments
The following applications are installed in the Kubernetes cluster using Helm:
Jenkins CI. Configured to run as a pod in the non-WebLogic node pool; the Jenkins home configuration is stored in the shared file system. The Jenkins agents are configured to run as pods in the WebLogic node pool and are created on demand.
NginX. The ingress controller is configured to run in the non-WebLogic node pool. As part of this module, the required ingress rules are configured to route the requests coming from the internal load balancer to the administrative consoles.
WebLogic Kubernetes Operator. The operator is configured to run in the non-WebLogic node pool.
The Jenkins CI/CD pipeline jobs
The first time you log in to the Jenkins console you will be prompted to create the first admin user. Out of the box you get the following Jenkins pipeline jobs:
sample-app: Example job to deploy the sample-app application.
jdk-patch: Use this job to quickly install a new JDK into the existing image.
opatch-update: Use this job to install individual Oracle WebLogic Server patches.
update-domain: Provide the WebLogic Deploy Tooling files to update your existing domain.
rebase-full-install: Use this job to generate an image with a custom JDK, WebLogic binaries, and patches.
test-and-deploy-domain-job: This job is automatically run at the end of the other five jobs to test and deploy the domain.
These pipeline jobs use the WebLogic Kubernetes Toolkit to create the domain configuration and domain image that are used to run the WebLogic domain in the OKE cluster. The WebLogic Kubernetes domain model currently supported is Domain Home in Image. We are actively evaluating support for the Model in Image domain model as well.
Getting Started
To get started you will need an Oracle Cloud account. If you do not already have an Oracle Cloud account, go here to create a new Free Tier account. See the tutorial Get Started with Oracle WebLogic Server for OKE. For more information on Oracle WebLogic Server for Oracle Cloud Infrastructure Container Engine for Kubernetes, review our product documentation. Then go to the Oracle Cloud Marketplace and test out a new way to run your Oracle WebLogic Server applications in Oracle Cloud.


Announcement

Disaster Recovery in Oracle WebLogic Server for Oracle Cloud Infrastructure

Disaster Recovery procedures for IT systems are a must for any Business Continuity Plan where the time to get back online is critical. In these systems you need to set up backup environments that are geographically dispersed and maintain the same configuration in both systems, so if one goes down you can switch to the other with minimal effort and downtime. It is with great pleasure that I share with you the Oracle WebLogic Server for Oracle Cloud Infrastructure Disaster Recovery guide, the most recent addition to the Oracle Maximum Availability Architecture (MAA). This document supports an active-passive topology where only one of the environments is up and running (Primary) and the other is in standby mode (Secondary). It covers automatic replication of the database using Oracle Data Guard and either manual or automatic synchronization of the WebLogic domain. The document includes all the scripts required for configuring your database and your WebLogic domain, and for replicating this configuration between the primary and the secondary environment. The document also includes well-defined steps for executing a planned switchover or a failover. The Oracle WebLogic Server for Oracle Cloud Infrastructure Disaster Recovery PDF file can be found in the MAA Best Practices - Oracle Fusion Middleware and the MAA Best Practices for the Oracle Cloud web pages. If you want to take a look at the latest features of Oracle WebLogic Server for Oracle Cloud Infrastructure, see the What’s new web page.


Jakarta Tech Talk Tuesday Sept 1 - Managing State in Elastic Microservices

On Tuesday, September 1, at 11AM EDT Aleks Seovic will be presenting a virtual Jakarta Tech Talk on "Managing State in Elastic Microservices".   You can register for this event here. Jakarta Tech Talks are both conference based and virtual webinars dedicated to the discussion and discovery of Jakarta EE and broader Cloud-Native Java technologies.  The abstract for this particular event is given below. Scaling stateless services is easy, but scaling their stateful data stores, not so much. This is true whether you are using an “old fashioned" relational database, or one of the popular, “modern" KV data stores, such as MongoDB or Redis. In this presentation we will discuss some of the issues with state management in elastic microservices today, and look into how Oracle Coherence, with its Helidon and Eclipse MicroProfile integration, provides a better alternative you can use tomorrow. This will be a technical presentation by an architect who has deep technical understanding of microservices and data grid technologies, and real-world experience building and deploying mission-critical applications.   Aleks' talk will include a demo showing how Java microservices can be easily scaled, including flexible scaling of the data store used by the microservices, using an example you can evaluate yourselves following his talk.   The example is built with Helidon 2.0  and Coherence Community Edition  announced in June.  Both Helidon and Coherence can be used with WebLogic Server to build and evolve applications, either on-premises or in the cloud, so the content will be relevant and thought-provoking for WebLogic Server users.  Register today!   


The WebLogic Server

Using Orachk for Application Continuity Coverage Analysis

As I described in the blog Part 2 - 12c Database and WLS - Application Continuity, Application Continuity (AC) is a great feature for avoiding errors to the user with minimal changes to the application and configuration. In the blog Using Orachk to Clean Up Concrete Classes for Application Continuity, I described one of the many uses of the Oracle utility program, focusing on checking for Oracle concrete class usage that needs to be removed to run with AC. The download page for Orachk is at https://support.oracle.com/epmos/faces/DocumentDisplay?id=1268927.2. This article focuses on a second analysis that can be done with Orachk: checking whether your application workload will be protected by AC.
Before running the workload to test AC, you need to turn on a specific tracing flag on the database server to see the RDBMS-side program interfaces for AC. Run the following statement on the database server so that the trace files will output the needed information. Normally this would be run as the DBA, likely in sqlplus or SQL Developer. This will enable tracing for all sessions.
SQL> alter system set event='10602 trace name context forever, level 28:trace[progint_appcont_rdbms]:10702 trace name context forever, level 16' ;
Then run the application workload. This will produce trace output that can be analyzed with orachk.
There are three values that control the AC checking (called acchk in orachk) for protection analysis. The values can be set either on the command line or via a shell environment variable (or mixed). They are the following.
-javahome JDK8dirname (environment variable RAT_JAVA_HOME): This must point to the JAVA_HOME directory for a JDK8 installation.
-apptrc dirname (environment variable RAT_AC_TRCDIR): To analyze trace files for AC coverage, specify a directory name that contains one or more database server trace files. The trace directory is generally $ORACLE_BASE/diag/rdbms/$ORACLE_UNQNAME/$ORACLE_SID/trace. This test works with database server 12 and later, since AC was introduced in that release.
(no command line argument; environment variable RAT_ACTRACEFILE_WINDOW): Optionally, when scanning the trace directory for trace files, this value limits the analysis to the specified most recent number of days. There may be thousands of files, and this parameter drops files older than the specified number of days.
For example:
$ ./orachk -acchk -javahome /tmp/jdk1.8.0_40 -apptrc $ORACLE_BASE/diag/rdbms/$ORACLE_UNQNAME/$ORACLE_SID/trace
Understanding the analysis requires some understanding of how AC is used. First, it's only available when using a replay driver, e.g., oracle.jdbc.replay.DataSourceImpl. Second, it's necessary to identify request boundaries to the driver so that operations can be tracked and potentially replayed if necessary. The boundaries are defined by calling beginRequest, after casting the connection to oracle.jdbc.replay.ReplayableConnection, which enables replay, and calling endRequest, which disables replay (see the sketch after the following list).
1. If you are using UCP or WLS, the boundaries are handled automatically when you get and close a connection.
2. If you aren't using one of these connection pools that are tightly bound to the Oracle replay driver, you will need to do the calls directly in the application.
3. If you are using a UCP or WLS pool but you get a connection and hold onto it instead of regularly returning it to the connection pool, you will need to handle the intermediate request boundaries. This is error prone and not recommended.
4. If you call commit on the connection, replay is disabled by default. For customers using 19.x and later, the recommendation is to use TAC (Transparent Application Continuity). For customers using earlier releases, you can set the service attribute SESSION_STATE_CONSISTENCY to STATIC mode instead of the default DYNAMIC mode; then a commit does not disable replay. See the first link above for further discussion of SESSION_STATE_CONSISTENCY. If you are using the default, you should close the connection immediately after the commit. Otherwise, the subsequent operations are not covered by replay for the remainder of the request.
5. It is also possible for the application to explicitly disable replay in the current request by calling disableReplay() on the connection.
6. There are also some operations that cannot be replayed, and calling one will disable replay in the current request.
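For applications that manage connections directly (case 2 in the list above), the request boundary calls look roughly like the following. This is a minimal sketch assuming the connection was obtained from a replay-enabled data source; the SQL statement and method names around it are hypothetical.
// Minimal sketch of explicit request boundaries for Application Continuity.
// Assumes conn came from a replay data source such as oracle.jdbc.replay.DataSourceImpl.
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import oracle.jdbc.replay.ReplayableConnection;

public class RequestBoundaryExample {
    static void doWork(Connection conn) throws Exception {
        ReplayableConnection rc = (ReplayableConnection) conn;
        rc.beginRequest();   // enables replay for the calls that follow
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("select 1 from dual")) {  // protected round-trip
            while (rs.next()) {
                // process results
            }
        } finally {
            rc.endRequest(); // disables replay and ends the request
        }
    }
}
With UCP or WLS data sources, these calls are made for you when the connection is borrowed from and returned to the pool, which is why returning connections promptly matters for coverage.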
The following is a summary of the coverage analysis.
- If a round-trip is made to the database server after replay is enabled and not disabled, it is counted as a protected call.
- If a round-trip is made to the database server when replay has been disabled or replay is inactive (not in a request, or it is a restricted call, or the disable API was called), it is counted as an unprotected call until the next endRequest or beginRequest.
- Calls that are ignored for the purpose of replay are ignored in the statistics.
At the end of processing a trace file, it computes (protected * 100) / (protected + unprotected) to determine PASS (>= 75), WARNING (25 <= value < 75), and FAIL (< 25). For example, 2 protected calls and 1 unprotected call yield (2 * 100) / (2 + 1) = 66, which falls in the WARNING range, as seen in the table below. Running orachk produces a directory named orachk_<uname>_<date>_<time>. If you want to see all of the details, look for the file named o_coverage_classes*.out under the outfiles subdirectory. It has the information for all of the trace files. The program generates an HTML file that is listed in the program output. It drops output related to trace files that PASS (but they might not be 100%). If you PASS but you don't get 100%, it's possible that an operation won't be replayed. The output includes the database service name, the module name (from v$session.program, which can be set on the client side using the connection property oracle.jdbc.v$session.program), and the ACTION and CLIENT_ID (which can be set using setClientInfo with "OCSID.ACTION" and "OCSID.CLIENTID" respectively). The following is an actual table generated by orachk.
Outage Type    Status    Message
Coverage checks        TotalRequest = 25  PASS = 20  WARNING = 5  FAIL = 0
    WARNING    [WARNING] Trace file name = orcl1_ora_10046.trc Row number = 738 SERVICE NAME = (dbhost.us.oracle.com) MODULE NAME = (ac_1_bt) ACTION NAME = (qryOrdTotal_SP@alterSess_OrdTot) CLIENT ID = (clthost-1199-Default-3-jdbc000386) Coverage(%) = 66 ProtectedCalls = 2 UnProtectedCalls = 1
    WARNING    [WARNING] Trace file name = orcl1_ora_10046.trc Row number = 31878 SERVICE NAME = (dbhost.us.oracle.com) MODULE NAME = (ac_1_bt) ACTION NAME = (qryOrder3@qryOrder3) CLIENT ID = (clthost-1199-Default-2-jdbc000183) Coverage(%) = 66 ProtectedCalls = 2 UnProtectedCalls = 1
    WARNING    [WARNING] Trace file name = orcl1_ora_10046.trc Row number = 33240 SERVICE NAME = (dbhost.us.oracle.com) MODULE NAME = (ac_1_bt) ACTION NAME = (addProduct@getNewProdId) CLIENT ID = (clthost-1199-Default-2-jdbc000183) Coverage(%) = 66 ProtectedCalls = 2 UnProtectedCalls = 1
    WARNING    [WARNING] Trace file name = orcl1_ora_10046.trc Row number = 37963 SERVICE NAME = (dbhost.us.oracle.com) MODULE NAME = (ac_1_bt) ACTION NAME = (updCustCredLimit@updCustCredLim) CLIENT ID = (clthost-1199-Default-2-jdbc000183-CLOSED) Coverage(%) = 66 ProtectedCalls = 2 UnProtectedCalls = 1
    WARNING    [WARNING] Trace file name = orcl1_ora_32404.trc Row number = 289 SERVICE NAME = (orcl_pdb1) MODULE NAME = (JDBC Thin Client) ACTION NAME = null CLIENT ID = null Coverage(%) = 40 ProtectedCalls = 2 UnProtectedCalls = 3
    PASS    Report containing checks that passed: /home/username/orachk/orachk_dbhost_102415_200912/reports/acchk_scorecard_pass.html
If you are not at 100% for all of your trace files, you need to figure out why. Make sure you return connections to the pool, especially after a commit. To figure out exactly what calls are disabling replay or what operations are done after commit, you should turn on replay debugging at the driver side. This is done by running with the debug driver (e.g., ojdbc8_g.jar), setting the command line options -Doracle.jdbc.Trace=true -Djava.util.logging.config.file=file.properties, and including the following line in the properties file: oracle.jdbc.internal.replay.level=FINEST.
The orachk utility can help you get your application up and running using Application Continuity:
- Get rid of Oracle concrete classes.
- Analyze the database operations in the application to see if any of them are not protected by replay.


Using Orachk to Clean Up Concrete Classes for Application Continuity

As I described in the blog Part 2 - 12c Database and WLS - Application Continuity, Application Continuity (AC) is a great feature for avoiding errors to the user with minimal changes to the application and configuration. Getting rid of any references to Oracle concrete classes is the first step. Oracle has a utility program that you can download from MOS to validate various hardware, operating system, and software attributes associated with the Oracle database and more (it's growing). The program name is orachk. In version 12.1.0.2.4 and later, there are some checks available for applications running with AC. There is enough documentation about getting started with orachk, so I'll just say to download and unzip the file. The AC checking is part of a larger framework that will have additional analysis in future versions. This article focuses on the analysis for Oracle concrete classes in the application code.
AC is unable to replay transactions that use the deprecated oracle.sql concrete classes ARRAY, BFILE, BLOB, CLOB, NCLOB, OPAQUE, REF, or STRUCT as a variable type, a cast, the return type of a method, or in a constructor call. See New Jdbc Interfaces for Oracle types (Doc ID 1364193.1) for further information about concrete classes. They must be modified for AC to work with the application. See Using API Extensions for Oracle JDBC Types for many examples of using the newer Oracle JDBC types in place of the older Oracle concrete types.
There are four values that control the AC checking (called acchk in orachk) for Oracle concrete classes. They can be set either on the command line or via a shell environment variable (or mixed). They are the following.
-asmhome jarfilename (environment variable RAT_AC_ASMJAR): This must point to a version of asm-all-5.0.3.jar that you download from http://asm.ow2.org/.
-javahome JDK8dirname (environment variable RAT_JAVA_HOME): This must point to the JAVA_HOME directory for a JDK8 installation.
-appjar dirname (environment variable RAT_AC_JARDIR): To analyze the application code for references to Oracle concrete classes like oracle.sql.BLOB, this must point to the parent directory name for the code. The program will analyze .class files, and recursively .jar files and directories. Starting in version 12.1.0.2.5, the Oracle concrete class checking has been enhanced to recursively expand .ear, .war, and .rar files in addition to .jar files. You no longer need to explode these archives into a directory for checking. This is a big simplification for customers using Java EE applications. Just specify the root directory for your application when setting the command line option -appjar dirname or the environment variable RAT_AC_JARDIR, and the orachk utility will do the analysis. This test works with software classes compiled for Oracle JDBC 11 and later.
-jdbcver (environment variable RAT_AC_JDBCVER): Target version for the coverage check.
When you run the AC checking, the additional checking about the database server, etc. is turned off. It would be common to run the concrete class checking on the mid-tier to analyze software that accesses the Oracle driver. I chose some old QA test classes that I knew had some bad usage of concrete classes and ran the test on a small subset for illustration purposes. The command line was the following.
$ ./orachk -asmhome /tmp/asm-all-5.0.3.jar -javahome /tmp/jdk1.8.0_40 -jdbcver 19.3 -appjar /tmp/appdir
This is a screen shot of the report details. There is additional information reported about the machine, OS, database, timings, etc.
From this test run, I can see that my one test class has five references to STRUCT that need to be changed to java.sql.Struct or oracle.jdbc.OracleStruct. Note that WLS programmers have been using the weblogic.jdbc.vendor.oracle.* interfaces for over a decade to allow for wrapping Oracle concrete classes, and this AC analysis doesn't pick that up (there are five weblogic.jdbc.vendor.oracle.* interfaces that correspond to concrete classes). These should be removed as well. For example, trying to run with this Oracle extension API and the WLS wrapper
import weblogic.jdbc.vendor.oracle.OracleThinBlob;
rs.next();
OracleThinBlob blob = (OracleThinBlob)rs.getBlob(2);
java.io.OutputStream os = blob.getBinaryOutputStream();
on a Blob column works with the normal driver, but with the replay driver it yields
java.lang.ClassCastException: weblogic.jdbc.wrapper.Blob_oracle_jdbc_proxy_oracle$1jdbc$1replay$1driver$1TxnReplayableBlob$2oracle$1jdbc$1internal$1OracleBlob$$$Proxy cannot be cast to weblogic.jdbc.vendor.oracle.OracleThinBlob
It must be changed to use the standard JDBC API:
rs.next();
java.sql.Blob blob = rs.getBlob(2);
java.io.OutputStream os = blob.setBinaryStream(1);
So it's time to remove references to the deprecated Oracle and WebLogic classes and preferably migrate to the standard JDBC APIs, or at least the new Oracle interfaces. This will clean up the code and get it ready to take advantage of Application Continuity in the Oracle database. The ORAchk download page is at https://support.oracle.com/epmos/faces/DocumentDisplay?id=1268927.2.
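Along the same lines, here is a hedged sketch of what the STRUCT cleanup flagged above might look like: a deprecated oracle.sql.STRUCT cast replaced with the standard java.sql.Struct interface. The column index and attribute handling are hypothetical.
// Before (deprecated concrete class, flagged by orachk acchk):
//   oracle.sql.STRUCT s = (oracle.sql.STRUCT) rs.getObject(1);
//   Object[] attrs = s.getAttributes();
// After (standard JDBC interface, compatible with the replay driver):
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Struct;

public class StructMigrationExample {
    static void readStruct(ResultSet rs) throws SQLException {
        while (rs.next()) {
            Struct s = (Struct) rs.getObject(1);  // column index is hypothetical
            Object[] attrs = s.getAttributes();   // read the object-type attributes
            // process attrs as before
        }
    }
}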


Announcement

WebLogic Kubernetes ToolKit Update – Operator 3.0.0

We are very excited to announce the release of WebLogic Kubernetes Operator 3.0.0. This latest version of the operator introduces features and support that give our users flexibility when applying updates to their domains and applications, and when automating CI/CD pipelines. WebLogic Kubernetes Operator 3.0.0 supports:
A new Model in Image pattern
Kubernetes 1.16, 1.17, 1.18
Istio
Helm 3
OLCNE 1.1 certification
Performance improvements so that a single operator can manage many domains
For Domain in Persistent Volume (PV), the ability to apply topology changes to the domain, which the operator can pick up without requiring any downtime
The WebLogic Kubernetes Operator is one of five open source tools that compose the WebLogic Kubernetes ToolKit. The ToolKit lets users migrate their existing applications, manage and update their domains, deploy and update their applications, monitor them, persist the logs, and automate the creation and patching of images. The tools in the ToolKit are:
WebLogic Kubernetes Operator – for management
WebLogic Deploy Tooling – for migration and configuration
WebLogic Image Tool – for image creation and patching
WebLogic Monitoring Exporter – for monitoring in Prometheus
WebLogic Logging Exporter – for logging in the Elastic Stack
WebLogic Kubernetes Operator 3.0.0
A Kubernetes Operator is an application-specific controller that extends the Kubernetes API to create, configure, and manage instances of complex applications. We have adopted the operator pattern to integrate WebLogic Server and Kubernetes, allowing Kubernetes to serve as a container infrastructure hosting WebLogic Server instances. The WebLogic Kubernetes Operator extends Kubernetes to enable creation, configuration, and management of a WebLogic Server domain. Find the GitHub project for the WebLogic Kubernetes Operator and documentation that includes a Quick Start guide and samples. The WebLogic Kubernetes Operator contains built-in knowledge about how to perform lifecycle operations on a WebLogic Server domain. Some of the operations performed by the operator are:
Provisioning a WebLogic Server domain
Managing WebLogic Server life cycle
Updating domains and applications
Scaling and shrinking WebLogic Server clusters
Securing configurations by defining RBAC roles
Creating Kubernetes services for communication between pods, and for load balancing requests across pods
The operator supports three different domain home source types (patterns): Domain in PV, Domain in Image, and Model in Image. The differences between these patterns are the domain home location and how you update the domain configuration and applications.
Domain in PV: You supply your domain home configuration in a persistent volume.
Domain in Image: You supply your domain home in a Docker image.
Model in Image: You supply a WebLogic Deploy Tooling (WDT) model and application archive in the image.
There are advantages to each domain home source type, but sometimes there are technical limitations of various cloud providers that may make one type better suited to your needs. See the documentation that compares the three patterns and will help you choose one.
WebLogic Deploy Tooling
The WebLogic Deploy Tooling makes the automation of WebLogic Server domain provisioning and application deployment easy. Instead of writing WLST scripts that need to be maintained, WDT creates a declarative, metadata model that describes the domain, applications, and the resources used by the applications.
This metadata model makes it easy to provision, deploy, and perform domain configuration lifecycle operations in a repeatable fashion. In the GitHub project for WebLogic Deploy Tooling, you will find the documentation and samples. Read the blog Make WebLogic Domain Provisioning and Deployment Easy!, which walks you through a complete sample.
WDT enables you to "discover" and introspect a domain configuration with its application binaries, and to "create" a replica of the domain configuration and application in another location. WDT supports discovering and introspecting WebLogic Server 10.3.6, 12.1.3, 12.2.1.x, and 14.1.1.x domains, and creating replicas of these domains in WebLogic Server 12.2.1.x or 14.1.1.x. When invoking WDT Discover, two files are created:
A YAML model file, the metadata model of the domain configuration.
An archive ZIP file containing the application binaries.
After the YAML model file has been created, you can customize and validate your configuration to meet Kubernetes requirements. The metadata representation of the domain configuration is a simpler and more readable way to represent the domain configuration. Two important features of WDT, which provide flexibility in creating or modifying domain configuration, are the ability to combine models in the order specified, and the use of model macros to reference arbitrary properties, environment variables, or secrets from model files. Invoking WDT Create takes the WDT YAML model file and archive and creates a domain home with the application deployed. You will find a sample of creating a Docker image with the domain home and application deployed to it with WDT at GitHub Sample.
WDT provides additional tools to make it easier for you to write your own models, validate models, and compare models. The Model Help Tool provides examples of the folders and attributes to use when creating a new domain model or expanding an existing model, including discovered models. The Compare Model Tool lets you look at the differences between two models. The Validate Model Tool provides validation of the model before the domain is created. WDT also supports modifying domain configuration and applications. The Update Domain Tool modifies domain configuration by adding or removing objects from the domain configuration, and the Deploy Applications Tool deploys and undeploys applications. WDT also supports sparse models, where the model only describes what is required for the specific operation without describing other artifacts. For example, to deploy or modify a JDBC data source in an existing domain, the model needs to describe only the data source in question. With the new operator 3.0 release, we are enabling support for deployment of WebLogic Server domains in Kubernetes using WDT models included in WebLogic Server Docker images. We recommend that such images be created with the WebLogic Image Tool.
The WebLogic Image Tool
The WebLogic Image Tool is an open source tool that lets you automate building, patching, and updating your WebLogic Server Docker images, including your own customized images. Find the WebLogic Image Tool's GitHub project at https://github.com/oracle/weblogic-image-tool. For more detailed information, read the blog Automate WebLogic Image Building and Patching. There are four major use cases for this tool:
To create a customized WebLogic Server Docker image where you can choose:
The OS base image (e.g. Oracle Linux 7.5).
The version of Java (e.g. 8u261).
The version of WebLogic Server or Fusion Middleware Infrastructure (FMW Infrastructure) installer (e.g. 12.2.1.3, 12.2.1.4, 14.1.1.0).
A specific WebLogic Server Patch Set Update (PSU).
One or more interim or "one-off" WebLogic Server patches.
To update and patch a customized WebLogic Server Docker image with WebLogic Server PSUs or one-off patches.
To build an image where the domain home, including the deployed applications, is inside of the image.
To build an image which stores the WDT YAML model file and application archive in a specific directory in the image.
The WebLogic Image Tool integrates with WebLogic Deploy Tooling to create WebLogic Server or FMW Infrastructure images containing a domain, or including a WDT model that the operator can then deploy and manage in a Kubernetes cluster.
Model in Image pattern
Operator 3.0.0 introduces a new domain source type (pattern) called Model in Image. Model in Image is an alternative to the operator's Domain in Image and Domain in PV patterns. The Model in Image pattern enables creating WebLogic Server Kubernetes configurations from Docker images that contain WDT model files and application archives. Using the WebLogic Image Tool, you generate a Docker image that contains the WebLogic binaries and places the WDT YAML model file and application archive in a specific directory inside of the image for the operator to retrieve before running the servers in Kubernetes pods. When operator 3.0.0 detects that the WebLogic Server Docker image being referenced contains a WDT model, it will create the WebLogic Server domain configuration in Kubernetes based on the content of the WDT model. The operator invokes WDT Create, takes the WDT model inside the image and any additional models in a WDT ConfigMap, aggregates them, and creates a domain home. The operator encrypts and packages the domain, places it in a ConfigMap, and then starts WebLogic Server pods based on the domain home. The server pods obtain their domain home from the ConfigMap. Unlike Domain in PV and Domain in Image, Model in Image eliminates the need to pre-create your WebLogic domain home prior to deploying your domain resource. For more details about the Model in Image pattern, see the Model in Image documentation and try the sample in our operator GitHub project.
When you want to perform configuration updates, you can provide one or more sparse models containing the configuration changes in a WDT ConfigMap, or create a new image with the WDT models containing the changes. You then ask the operator to apply the changes and propagate them to a running domain. The operator invokes WDT Update to make the changes to the domain home and initiates a rolling restart; the servers will now point to the updated domain in the ConfigMap. After the domain rolling restart, the configuration changes are visible in the WebLogic Server Administration Console, via REST, or via WLST. The advantages of the Model in Image pattern are:
Model in Image is based on an image that is portable between environments.
It supports making dynamic changes to the domain configuration, both the first time the domain is deployed, as well as when the domain configuration is updated.
The domain is more secure because no encryption keys or credentials are kept in the image.
Model in Image embraces automated CI/CD pipelines.
Coming Next
Our future plans include continued enhancements of the operator, for example, to provide the ability to make configuration changes dynamically, without requiring a rolling restart of the WebLogic domain, and to optimize the operator so that it can manage WebLogic domains in very large Kubernetes clusters (such as about 1,600 namespaces). If you are interested in deploying a WebLogic domain and operator 3.0.0, clone the WebLogic Kubernetes Operator project and try the different samples. We hope this announcement is helpful to those of you seeking to deploy WebLogic Server on Kubernetes, and we look forward to your feedback.


Coherence Community Edition and Helidon 2.0 Announcement - Replays Available

On June 25, the Coherence and Helidon teams announced the releases of Coherence Community Edition and Helidon 2.0. These products make it easy to build fast, lightweight, scalable and reliable Java microservices, and to integrate them into polyglot architectures. Coherence Community Edition is a new open source project containing the core of the Oracle Coherence product offerings.  Coherence Community Edition contains the key In-Memory Data Grid functionality necessary to write modern cloud microservices applications.   Two releases are available today: 14.1.1.0.1, the core of all Coherence 14.1.1 editions as of patch set update 1; and 20.06, the June 2020 interim release containing additional new features for microservices developers. Oracle continues to provide the existing commercial editions of Oracle Coherence, Oracle Coherence Enterprise Edition and Oracle Coherence Grid Edition, that add additional value and feature content to the Coherence Community Edition core.   See the Coherence announcement blog for more information.  Helidon 2.0 is a new release of Helidon, which provides a set of libraries that simplify development of Java microservices.  Helidon 2.0 adds significant improvements for both the Helidon SE and Helidon MP programming styles.  It includes new Helidon MP GraalVM Native Image support, a new CLI for building applications supporting multiple packaging profiles, new reactive programming feature support, support for new MicroProfile and Jakarta EE APIs, Java SE 11 support and more.  See the Helidon 2.0 announcement blog for more information. The products, and a demo application illustrating how the products can be used together, were announced in two technical webinars delivered on June 25.   In case you were not able to participate, replays of these webcasts are now available: Replay of Webinar for Asia-Pacific time zones Replay of Webinar for Europe, Middle East, Africa and Americas time zones We believe these announcements will provide Oracle WebLogic Server users and Java EE and Jakarta EE developers with new insights into how they can evolve existing applications, and develop new applications.  Check them out!   


Registration is Open for Coherence CE and Helidon 2.0 Webinars on June 25

This will be a very interesting event for Oracle WebLogic Server users and Java EE and Jakarta EE developers. Many of you are already familiar with Helidon, which provides a set of Java libraries for developing microservices, and Oracle Coherence, the leading In-Memory Data Grid technology built in Java. Both products can be used on a standalone basis, and can also be used together with Oracle WebLogic Server. Oracle Coherence can be used to improve the performance and availability of Oracle WebLogic Server applications, and to provide complementary processing of data stored on the data grid. Helidon can be used to deploy Java microservices alongside WebLogic Server applications, or to migrate WebLogic Server applications to microservices. Oracle provides integrated support of these technologies today, on-premises or on Oracle Cloud, and with the Oracle Database and Database Cloud Services, and intends to continue to do so in the future. In two webinars delivered on June 25th, one for Asia-Pacific time zones, and one for Europe, Middle East, Africa and Americas time zones, we will be introducing two innovative releases of these technologies - Coherence CE and Helidon 2.0. We're inviting all to register for the webinars describing the releases, how they can be used, and how they can be used together, including with GraalVM. If you're interested in leveraging your expertise in Java to deliver lightweight, scalable and reliable microservices that can be integrated into cloud native polyglot environments, come to one of these webinars. Please register now for one of these webinars, at the time that works for you:
June 25, 2020, Webinar for Asia-Pacific time zones
June 25, 2020, Webinar for Europe, Middle East, Africa and Americas time zones
Looking forward to seeing you there.


Announcement

Announcing Oracle Support for Helidon for WebLogic and Coherence customers

Over the past few years, container technologies and microservices architectures have offered the opportunity to simplify the runtime infrastructure for applications, and microservices-based applications are being developed widely across companies and industries. About 18 months ago, Oracle announced Project Helidon, an open source, lightweight, fast, reactive, cloud native framework for developing Java microservices. Helidon implements and supports MicroProfile, a baseline platform definition that leverages Java EE and Jakarta EE technologies for microservices and delivers application portability across multiple runtimes. Helidon also supports GraalVM Native Image for much faster startup time and reduced memory footprint. Oracle Support for Helidon is now included with Oracle Support for the following Oracle product offerings: Oracle WebLogic Server Standard Edition Oracle WebLogic Server Enterprise Edition Oracle WebLogic Suite Oracle Coherence Enterprise Edition Oracle Coherence Grid Edition Oracle has defined a Helidon Lifecycle Support Policy that ensures 36 months of support for a new major version of Helidon after it becomes generally available.  During this time period, customers can receive 24x7 Oracle Support for Helidon consistent with the support levels provided for Oracle WebLogic Server and Oracle Coherence. For more details on Oracle Support for Helidon, customers with Oracle Support contracts should refer to My Oracle Support (MOS) Document 2645279.1. To learn more about Helidon, and to try it out, visit: https://helidon.io, and stay posted for future updates on integration across Helidon, Oracle WebLogic Server, Oracle Coherence and GraalVM.  
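To give a small, hedged illustration of the MicroProfile programming model mentioned above, the sketch below shows the kind of JAX-RS resource a MicroProfile runtime such as Helidon MP can serve; the path, class name, and message are hypothetical.
// A minimal JAX-RS resource, the style of component MicroProfile runtimes serve.
import javax.enterprise.context.ApplicationScoped;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@ApplicationScoped
@Path("/greet")
public class GreetResource {
    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String greet() {
        return "Hello from a MicroProfile microservice";  // hypothetical response
    }
}
Because the annotations are standard Java EE and Jakarta EE APIs, code like this is portable across MicroProfile runtimes, which is the portability point the announcement makes.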


Announcement

Announcing Oracle WebLogic Server 14.1.1

Oracle is excited to announce the release of Oracle WebLogic Server Version 14.1.1! Oracle WebLogic Server 14.1.1 is a new major version, adding support for Java Platform, Enterprise Edition (Java EE) 8, and Java SE 8 and 11. It is supported on-premises and in the cloud, including support and tooling for running Oracle WebLogic Server in containers and Kubernetes, and certification on Oracle Cloud. We integrate with a wide variety of platforms and Oracle software that deliver high performance and availability for your applications, with low cost of ownership. Software is available for download from Oracle Technology Network and Oracle Software Delivery Cloud, and you can view our updated product documentation here.
Updated Java EE and Java SE Support
Oracle WebLogic Server 14.1.1 includes support for Java EE 8, the latest Java EE version with new APIs for modern enterprise applications. Servlet 4.0 includes HTTP/2 support, offering improved application performance with compatibility for existing Web applications. JAX-RS 2.1 advances REST services support by offering a reactive client programming model, and Server-Sent Events for pushing events to application clients. The JSON-P 1.1 and JSON-B 1.0 standards deliver new capabilities for processing JSON documents. Together with improvements to CDI and Bean Validation, these improvements expand support for building modern applications using the standards-based, proven Java EE platform. We are currently testing Oracle WebLogic Server for Jakarta EE 8 compatibility as well, and should have results soon.
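To give a flavor of the Java EE 8 APIs mentioned above, here is a minimal, hedged sketch that serializes an object with JSON-B 1.0 and issues a reactive GET with the JAX-RS 2.1 client. The data class and the target URL are hypothetical.
import java.util.concurrent.CompletionStage;
import javax.json.bind.Jsonb;
import javax.json.bind.JsonbBuilder;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;

public class JavaEE8Example {
    // Hypothetical data class used only for illustration.
    public static class Greeting {
        public String message = "hello";
    }

    public static void main(String[] args) throws Exception {
        // JSON-B 1.0: map a Java object to JSON without hand-written parsing code.
        try (Jsonb jsonb = JsonbBuilder.create()) {
            String json = jsonb.toJson(new Greeting());
            System.out.println(json);   // {"message":"hello"}
        }

        // JAX-RS 2.1 reactive client: rx() returns a CompletionStage-based invoker.
        Client client = ClientBuilder.newClient();
        CompletionStage<String> response = client
                .target("http://localhost:8080/api/greeting")   // hypothetical endpoint
                .request()
                .rx()
                .get(String.class);
        response.thenAccept(System.out::println)
                .toCompletableFuture()
                .join();
        client.close();
    }
}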
Oracle WebLogic Server's Java SE strategy is to support Java SE Long Term Support (LTS) releases in new Oracle WebLogic Server versions as they are delivered. Oracle WebLogic Server 14.1.1 is supported on Java SE 11, so that development teams have the option to use new Java SE 11 capabilities in their applications. We also continue to support Java SE 8, to give users the option to use the Java SE versions they may already be using for their Oracle WebLogic Server 12.1.3 or 12.2.1.X versions. Developers will have a choice between two Java SE versions, with long support lifecycles, to use in their Oracle WebLogic Server applications.
Cloud Support
Oracle WebLogic Server 14.1.1 is certified to run on Docker and CRI-O containers and on Kubernetes, both on-premises and in public clouds, similar to the certification that has been provided with version 12.2.1.X. The same container-native, open source tools we have implemented to support 12.2.1.X applications on Kubernetes, and that we have enhanced on an ongoing basis over the past year, can also be used with Oracle WebLogic Server 14.1.1. See the links below to:
Oracle WebLogic Server Kubernetes Operator
Oracle WebLogic Server Deploy Tooling
Oracle WebLogic Server Image Tool
Oracle WebLogic Server Monitoring Exporter
Oracle WebLogic Server Logging Exporter
Customers will have many options for deploying 14.1.1 applications on Oracle Cloud Infrastructure, or other Authorized Cloud Environments.
New Technology Integration
Oracle continues to integrate Oracle WebLogic Server with other technologies to deliver improved application performance and availability with compatibility, investment protection, and low cost. Oracle Coherence 14.1.1, also being released today, is fully integrated and certified with Oracle WebLogic Server 14.1.1, offering customers continued performance and availability benefits, along with new Oracle Coherence features. Oracle WebLogic Server has been certified to run on GraalVM Enterprise Edition 19.3, offering performance benefits of up to 5%-10%. Integration with the Oracle Database Edition-Based Redefinition (EBR) feature enables coordination of application updates with database changes. And both Kubernetes and non-Kubernetes users should evaluate the use of the Oracle WebLogic Server Deploy Tooling to simplify domain lifecycle management operations.
Platform Support and Upgradeability
In addition to Oracle Database and Oracle Java SE, we continue to certify Oracle WebLogic Server with other Oracle technologies, such as Oracle Linux, Oracle Linux Cloud Native Environment, Oracle Private Cloud Appliance, and a long list of third-party platforms and databases. Customers will be empowered to flexibly upgrade existing applications to 14.1.1, where they can leverage 14.1.1 compatibility with prior releases, compatibility with platforms previously supported, and also leverage updated application development and deployment capabilities. All of this means protection for the investment you make in your Oracle WebLogic Server applications, and options for evolving applications to meet the future needs of your organizations. We invite you to try Oracle WebLogic Server 14.1.1, and also to monitor this blog to track additional announcements that will give you more options for building and deploying your applications on-premises or in the cloud.


Announcing Oracle WebLogic Server for Oracle Cloud Infrastructure

We’re excited to announce flexible new options to run Oracle WebLogic Server for Oracle Cloud Infrastructure, allowing customers to move their on-premises Oracle WebLogic Server workloads to the cloud, or to build new Java EE applications.  New listings in the Oracle Cloud Marketplace enable you to quickly provision Oracle WebLogic Server configurations using Oracle’s Universal Credits model (UCM).   Oracle WebLogic Server for Oracle Cloud Infrastructure enables you to: Rapidly provision Oracle WebLogic Server configurations Run Oracle WebLogic Server in Oracle Cloud, paying only for what you use Easily develop, deploy and manage Java EE applications The new listings in the Oracle Cloud Marketplace are as follows: Oracle WebLogic Server Enterprise Edition for Oracle Cloud Infrastructure (UCM) enables you to build and deploy Java EE applications and use Oracle WebLogic Server features such as clustering, integration with external databases, and management through the Administration console, WebLogic Scripting Tool (WLST), and Oracle WebLogic Server REST Management APIs.   Oracle WebLogic Server Suite for Oracle Cloud Infrastructure (UCM) includes all of the above, plus Active GridLink for RAC for optimized integration with Oracle Database Real Application (RAC) Clusters and Oracle Coherence Enterprise Edition.   The above listings complement existing listings available on a Bring Your Own License (BYOL) basis: Oracle WebLogic Server Standard Edition for Oracle Cloud Infrastructure (BYOL) Oracle WebLogic Server Enterprise Edition for Oracle Cloud Infrastructure (BYOL) Oracle WebLogic Suite for Oracle Cloud Infrastructure (BYOL) Rapid provisioning You can find the new listings in your Oracle Cloud account by going to Oracle Cloud Marketplace, and searching for “Oracle WebLogic Server”. Or you can filter the listings as indicated below.  The listings with “UCM” in the name are the new listings that are billable via Universal Credits: After selecting one of these images, the Marketplace interface will guide you through a simple UI that will enable you to create an Oracle WebLogic Server domain configuration on Oracle Cloud. Here’s an example screen: Within minutes you have a WebLogic Server configuration running on Oracle Cloud that you can use for developing, testing and running production applications. Flexible Pricing Configurations created from these new UCM listings on Oracle Cloud Marketplace will automatically be metered to measure your usage based on the number of OCPU hours (compute resources) you actually use when running the configuration. You can quickly create, start, stop, restart and destroy development and test environments as needed, or scale configurations up and down in response to production demands.  You will be billed only for the resources you consume, via a single, consolidated bill, with Pay As You Go or Universal Credits Monthly Flex pricing options.   Pricing details are provided here.   Options to use Oracle WebLogic Server licenses via a Bring Your Own License (BYOL) model are also available as an alternative to usage-based pricing.  You may use the BYOL listings to provision Oracle WebLogic Server configurations and run your applications based on Oracle WebLogic Server license and support entitlements, either existing entitlements, or new entitlements you acquire to run Oracle WebLogic Server in the cloud.     Develop and Deploy Applications in the Cloud These configurations run the same Oracle WebLogic Server software that you use on your on-premises systems. 
They support the same Java Enterprise Edition (EE) APIs to build Web applications, REST services, JMS and transactional applications and other Enterprise Java applications. Oracle Application Development Framework (ADF) applications are also supported.  You have your choice of Oracle WebLogic Server 10.3.6, 12.2.1.3, and 12.2.1.4 versions, and Java Required Files (JRF) and non-JRF domains.   You can use the same CI/CD practices, WLST scripts, REST management clients and Administration Console being used on-premises, for building, deploying, managing and monitoring WebLogic Server applications in Oracle Cloud, with full compatibility.   You can spread your deployments across clustered configurations for high availability and performance.   You now have a fully compatible development and deployment option in Oracle Cloud for existing and new Oracle WebLogic Server applications.

Getting Started

To get started, you will need an Oracle Cloud account.  If you do not already have one, go here to create a new Free Tier account.  For more information on Oracle WebLogic Server for Oracle Cloud Infrastructure, visit our resources on oracle.com, and review our product documentation.   Then go to the Oracle Cloud Marketplace and test out a new way to run your Oracle WebLogic Server applications in Oracle Cloud.  We’re looking forward to hearing from you.
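If you prefer to script your environments, you can also locate these Marketplace listings from the OCI command line rather than the console. This is a hedged sketch: it assumes the OCI CLI is installed and configured, and that the listing name string matches the Marketplace UI exactly; adjust the name to the edition you are interested in.

# Search the Oracle Cloud Marketplace for one of the new UCM listings by name (name string is an assumption).
oci marketplace listing list \
  --name "Oracle WebLogic Server Enterprise Edition for Oracle Cloud Infrastructure (UCM)"
# Or list everything and filter locally for WebLogic-related entries.
oci marketplace listing list --all | grep -i "WebLogic"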


Announcement

Oracle WebLogic Server 12.2.1.4 is Released

On September 27, Oracle released Oracle WebLogic Server 12.2.1.4 as part of the overall Fusion Middleware 12.2.1.4 release.  Downloads are available for developers here and for production purposes on Oracle Software Delivery Cloud.  This is a patch set release for Oracle WebLogic Server 12.2.1.X, delivered for maintenance purposes, incorporating functional and security bug fixes identified since the Oracle WebLogic Server 12.2.1.3 patch set release.  We have deliberately limited new feature content between Oracle WebLogic Server 12.2.1.3 and 12.2.1.4 in the interest of simplifying adoption of Oracle WebLogic Server 12.2.1.4 by existing Oracle WebLogic Server 12.2.1.X customers.   See the What's New documentation for detailed new feature capabilities in 12.2.1.4.   In general, new feature capabilities are being targeted to the Oracle WebLogic Server 14.1.1 new version release.

It is important to note that Oracle WebLogic Server 12.2.1.4 has been designated as a Long Term Support (LTS) patch set release, formerly known as a terminal patch set release, for WLS 12.2.1.X.  This means that error correction - new patches and Patch Set Updates (PSUs) - will be provided for Oracle WebLogic Server 12.2.1.4 for the remainder of the support lifecycle as published in the Oracle Fusion Middleware Lifetime Support Policy - look for "Oracle WebLogic Server 12.2.X" - and as documented in My Oracle Support Document 950131.1.  Customers adopting Oracle WebLogic Server 12.2.1.4 will be able to leverage 12.2.1.4 as their production deployment platform for many years to come.   We recommend that 12.2.1.X users plan on adopting 12.2.1.4 for this reason.    Customers running on Oracle WebLogic 10.3.6 or 12.1.3 should also review the remaining support lifecycle for these versions and plan their upgrade to Oracle WebLogic Server 12.2.1.4.

Dockerfiles and scripts are available in the GitHub project to build and customize Oracle WebLogic Server 12cR2 (12.2.1.4) Docker images.  There are three different Dockerfiles in the project: one for the developer distribution, a second for the generic distribution, and a third for the slim distribution.  These images can be used to create multi-server or single server WebLogic domains and have been optimized in order to produce the smallest possible image size.  All WebLogic servers that belong to the same WebLogic domain must be run from the same image.  Support is also provided for environments where Kubernetes and/or the WebLogic Kubernetes Operator is not being used. 1- The WebLogic generic image is supported for development and production deployment of WebLogic configurations using Docker. It contains the same binaries as those installed by the WebLogic generic installer. The WebLogic generic image is primarily intended for WebLogic domains managed with the WebLogic Kubernetes Operator, when WLS console-based monitoring, and possibly configuration, is required. 2- The WebLogic slim image is supported for development and production deployment of WebLogic configurations using Docker. In order to reduce image size, it contains a subset of the binaries included in the WebLogic generic image. The WebLogic console, WebLogic examples, WebLogic clients, Maven plug-ins and Java DB have been removed - all binaries that remain included are the same as those in the WebLogic generic image.
The WebLogic slim image is primarily intended for WebLogic domains managed with the WebLogic Kubernetes Operator, when WLS console-based monitoring and configuration is not required, and a smaller image size than the generic image is preferred. If there are requirements to monitor the WebLogic configuration, they should be addressed using Prometheus and Grafana or other alternatives. 3- The WebLogic developer image is supported for development of WebLogic applications in Docker containers. In order to reduce image size, it contains a subset of the binaries included in the WebLogic generic image. WebLogic examples and WLS Console help files have been removed - all binaries that remain included are the same as those in the WebLogic generic image. The WebLogic developer image is primarily intended to provide a Docker image that is consistent with the WebLogic "quick installers" intended for development only. Production WebLogic domains should use the WebLogic generic or WebLogic slim images. So please start targeting your WebLogic Server 12.2.1 projects to WebLogic Server 12.2.1.4 to take advantage of the latest release with the latest features and maintenance. We hope to have more updates for you soon!
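To make the image options above concrete, here is a hedged sketch of building a 12.2.1.4 image from the GitHub project mentioned earlier. The script name comes from the project; the individual flags are assumptions from memory, so run the script with -h first and download the required installers per the project README before building.

# Clone the Oracle Docker images project and move to the WebLogic dockerfiles.
git clone https://github.com/oracle/docker-images.git
cd docker-images/OracleWebLogic/dockerfiles
# Place the 12.2.1.4 installer zip and a supported JDK package where the README says to,
# then build. The -g flag (generic distribution) is an assumption -- confirm with -h.
./buildDockerImage.sh -v 12.2.1.4 -g
# Confirm the image was created.
docker images | grep weblogic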


Technical

Creating a WebLogic Domain with Coherence in Kubernetes

Creating a WebLogic domain with a Coherence cluster and deploying it in Kubernetes, then managing the domain life cycle, is a difficult task made easier with a combination of the WebLogic Kubernetes Operator and the WebLogic Deploy Tooling (WDT). You don’t need to write any WebLogic Scripting Tool (WLST) scripts or bash scripts to configure the domain with Coherence, or to activate lifecycle operations. All of this is done for you by the Operator and WDT. To build a domain image, you just provide a short, declarative domain configuration YAML file, then WDT will create and configure the domain for you. WDT will also deploy your artifacts into any domain target. After you have the image, it is easy to create the WebLogic domain in Kubernetes using the Operator. Furthermore, the Operator now supports rolling restart of Coherence managed servers to prevent the cluster from losing data during a domain restart. This article shows you exactly how to create the image, create the domain, and test a rolling restart. Note: This article uses a Coherence cluster that runs as part of a WebLogic domain.  You can also use the Coherence Operator to create and manage a stand-alone Coherence cluster in Kubernetes.  See https://github.com/oracle/coherence-operator for more information.

Before you start

You must complete the WebLogic Operator Quick Start Guide to prepare the environment for this sample. Most of the prerequisites are fulfilled in that guide.   Note: This article uses Docker Desktop for Mac for the Docker and Kubernetes environment. Prerequisites:
Complete QuickStart Guide
WebLogic Operator 2.3.0
WebLogic image at container-registry.oracle.com/middleware/weblogic:12.2.1.3-dev (tagged as oracle/weblogic:12.2.1.3-developer)
Kubernetes 1.11.5+, 1.12.3+, 1.13.0+, and 1.14.0+
Docker 18.03.1.ce+
Helm 2.8.2+

Building a WebLogic Domain Docker Image using WDT

Let’s get started by building the domain image using WDT. First, clone the Oracle Docker Image project as shown below.  This project has the scripts and files needed to build the image used for this exercise.

Prepare your environment to use WDT

Create a work directory and clone the Oracle Docker image git repository:
mkdir ~/coh-demo
cd ~/coh-demo
git clone https://github.com/oracle/docker-images.git
cd ~/coh-demo/docker-images/OracleWebLogic/samples/12213-coherence-domain-in-image-wdt
The WDT tool is not included in the git repository you just cloned, so use the command below to get the weblogic-deploy.zip file from https://github.com/oracle/weblogic-deploy-tooling/releases. Execute the following command to download WDT version 1.3:
curl -v -Lo ./weblogic-deploy.zip https://github.com/oracle/weblogic-deploy-tooling/releases/download/weblogic-deploy-tooling-1.3.0/weblogic-deploy.zip

Inputs needed by WDT

Before running WDT, let's view the model YAML file that it needs. The YAML file specifies the domain configuration and the deployment artifacts. The YAML shown below also specifies a CoherenceClusterSystemResource, which is required for a Coherence cluster. Notice that the Coherence cluster is scoped to WebLogic cluster-1. This means that each WebLogic Server in cluster-1 will be a member of the Coherence cluster. This includes new servers that are created during scale out. The domain will be able to scale out to five managed servers, but you can control the actual number of servers that are started when you create the domain in Kubernetes, as you will see later.
The appDeployments section specifies the applications being deployed to the domain, as described in the next section.  Following is the WDT model file, cohModel.yaml; you do not need to modify it:

domainInfo:
    AdminUserName: '@@FILE:/u01/oracle/properties/adminuser.properties@@'
    AdminPassword: '@@FILE:/u01/oracle/properties/adminpass.properties@@'
topology:
    AdminServerName: 'admin-server'
    ProductionModeEnabled: true
    Log:
        FileName: domain1.log
    NMProperties:
        JavaHome: /usr/java/jdk1.8.0_211
        LogFile: '@@DOMAIN_HOME@@/nodemanager/nodemanager.log'
        DomainsFile: '@@DOMAIN_HOME@@/nodemanager/nodemanager.domains'
        NodeManagerHome: '@@DOMAIN_HOME@@/nodemanager'
        weblogic.StartScriptName: startWebLogic.sh
    Cluster:
        'cluster-1':
            CoherenceClusterSystemResource: CoherenceCluster
            DynamicServers:
                ServerNamePrefix: 'managed-server-'
                MaxDynamicClusterSize: 5
                CalculatedListenPorts: false
                MaximumDynamicServerCount: 5
                ServerTemplate: 'cluster-1-template'
                DynamicClusterSize: 5
    Server:
        'admin-server':
            NetworkAccessPoint:
                T3Channel:
                    PublicPort: 30012
                    ListenPort: 30012
                    PublicAddress: kubernetes
    ServerTemplate:
        'cluster-1-template':
            ListenPort: 8001
            Cluster: 'cluster-1'
            JTAMigratableTarget:
                Cluster: 'cluster-1'
            SSL:
                ListenPort: 8100
resources:
    CoherenceClusterSystemResource:
        CoherenceCluster:
            CoherenceResource:
                CoherenceClusterParams:
                    ClusteringMode: unicast
                    ClusterListenPort: 7574
appDeployments:
    Application:
        'coh-proxy':
            SourcePath: 'wlsdeploy/applications/coh-proxy-server.gar'
            ModuleType: gar
            Target: 'cluster-1'

Now, you can build the WDT archive.zip file which includes the coh-proxy-server.gar to be deployed to WebLogic Server. The WDT archive can contain multiple artifacts, such as WAR and EAR files. The Coherence GAR file is the only artifact deployed here, as shown in the YAML above, because we are using a Coherence proxy running in the domain. The application accessing the cache will be running on a development machine, so the GAR file has only the proxy configuration needed by Coherence.  Build the archive as shown below:
./build-archive.sh

Build the image

At this point, everything is in place to build the image that will be used to create the WebLogic domain in Kubernetes. WDT will be executed inside of a Docker container to build the domain; it is never executed on your development machine. The Docker build file will copy the WDT zip file to the Docker container, unzip it, and then run WDT to build the domain and deploy the Coherence GAR file. After the docker build is done, you will have an image that contains both the WebLogic Server binary home and the domain home. Let’s run the exact Docker build command shown below. Build the WebLogic image:
docker build -f Dockerfile --no-cache \
  --build-arg CUSTOM_DOMAIN_NAME=sample-domain1 \
  --build-arg WDT_MODEL=cohModel.yaml \
  --build-arg WDT_ARCHIVE=archive.zip \
  --build-arg WDT_VARIABLE=properties/docker-build/domain.properties \
  --force-rm=true \
  -t coherence-12213-domain-home-in-image-wdt .
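As an optional sanity check (a small sketch, not part of the original instructions), you can confirm that the domain home was actually baked into the image; the path below matches the domainHome value used in the domain resource in the next section.

# List the domain home created by WDT inside the freshly built image.
docker run --rm coherence-12213-domain-home-in-image-wdt:latest \
  ls /u01/oracle/user_projects/domains/sample-domain1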
Creating the domain in Kubernetes

You can now use the WebLogic Kubernetes Operator to create a domain, using your custom image.  The Operator’s main responsibility is lifecycle management of the domain, including the following:
Provisioning (domain creation)
Termination (domain deletion)
Scale out
Scale down
Rolling restart
Health check and pod auto-restart
If you don’t have an Operator environment set up, then follow the Quick Start instructions. The remaining steps in this article require that the Operator is deployed to Kubernetes and the account and secret used for the domain creation exist.

Preparing to create a domain

The Operator domain YAML file below was generated using the Quick Start instructions. We have made a few changes:
Image name is coherence-12213-domain-home-in-image-wdt:latest
JAVA_OPTIONS is "-Dtestval=1". This is just a placeholder for the subsequent rolling restart exercise.
timeoutSeconds: 300. This will be discussed later in the article.
Create a file named domain.yaml with the following contents:

apiVersion: "weblogic.oracle/v4"
kind: Domain
metadata:
  name: sample-domain1
  namespace: sample-domain1-ns
  labels:
    weblogic.resourceVersion: domain-v2
    weblogic.domainUID: sample-domain1
spec:
  domainHome: /u01/oracle/user_projects/domains/sample-domain1
  domainHomeInImage: true
  image: "coherence-12213-domain-home-in-image-wdt:latest"
  imagePullPolicy: "Never"
  webLogicCredentialsSecret:
    name: sample-domain1-weblogic-credentials
  includeServerOutInPodLog: true
  serverStartPolicy: "IF_NEEDED"
  serverPod:
    shutdown:
      timeoutSeconds: 120
    env:
      - name: JAVA_OPTIONS
        value: "-Dtestval=1 "
      - name: USER_MEM_ARGS
        value: "-XX:+UseContainerSupport -Djava.security.egd=file:/dev/./urandom "
      - name: SHUTDOWN_TYPE_ARG
        value: "forced"
  adminServer:
    serverStartState: "RUNNING"
    adminService:
      channels:
        - channelName: default
          nodePort: 30701
  clusters:
    - clusterName: cluster-1
      serverStartState: "RUNNING"
      replicas: 2

Create the domain

Run kubectl apply to create the domain, and Kubernetes will notify the Operator that a domain resource is being configured. Notice the kind: Domain field in the YAML file. This identifies the resource as a domain Kubernetes Custom Resource, which is managed by the Operator. The Operator will create the Kubernetes resources needed to provision the WebLogic domain, including a pod for the administration server and a pod for each managed server. The replicas: 2 field in the YAML file tells the Operator to create two managed servers. Let’s execute the following command to create the domain. Create the WebLogic domain in Kubernetes:
kubectl apply -f ./domain.yaml
You can monitor the progress of the domain creation by checking the status of the pods. When the READY field is 1/1 and the STATUS is Running, then the WebLogic Server on that pod is ready. The Operator uses a Kubernetes readiness probe to determine when the pod is ready, and a liveness probe to determine if the server is healthy. See Kubernetes Probes for more information. It takes a few minutes to fully start the domain; then we can move to the next section. Check the status of the domain pods:
kubectl get pod -n sample-domain1-ns
...
NAME                              READY   STATUS    RESTARTS   AGE
sample-domain1-admin-server       1/1     Running   0          5h22m
sample-domain1-managed-server-1   1/1     Running   0          5h20m
sample-domain1-managed-server-2   1/1     Running   0          5h18m

You may notice that you didn’t need to do any network configuration for Coherence. That’s because the WebLogic Managed Coherence Server automatically generates a Well Known Address (WKA) list and configures Coherence to use it along with UNICAST networking. Even when you scale out the cluster, the new server will automatically be configured with the WKA list.

Running the Coherence proxy client

At this point, you have a WebLogic domain deployed with a Coherence cluster running. The Coherence coh-proxy-server.gar file that was deployed configures Coherence to accept a proxy connection and provide access to a distributed service where a cache can be created. Because the service is configured with autostart=true, it is up and running and ready to be used. As mentioned previously, the cache service is restricted to the servers in WebLogic cluster-1.

Loading and validating cache

Build the Coherence proxy client program, which both loads the cache and verifies the contents:
cd coh-proxy-client
mvn package -DskipTests=true
Because we are running the application from a development machine outside the Kubernetes cluster, a service is required to provide access to the Coherence cluster. The kubectl expose command can be used for this. Create a service to expose the Coherence proxy, which is listening on port 9000:
kubectl expose pod sample-domain1-managed-server-1 -n sample-domain1-ns --port=9000 --target-port=9000 --type=LoadBalancer --name=coh-proxy-ms1
Now we are ready to access the Coherence cluster. Let’s load 10,000 entries, where each entry has a 1k value and an integer key. Run the proxy client application to load the cache:
java -jar target/proxy-client-1.0.jar load
...
Loading cache
Cache size = 10000
SUCCESS Load Test - elapsed Time = 2 seconds
Next, we will validate the contents of the cache by reading every cache entry and checking the value. Run the proxy client application to validate the cache:
java -jar target/proxy-client-1.0.jar validate
...
Cache size = 10000
SUCCESS Validate Test - elapsed Time = 3 seconds

More lifecycle operations

So far, we have created a domain image using WDT, provisioned a WebLogic domain in Kubernetes, and tested the Coherence cluster by loading and validating cache data. Now let’s see how easy it is to scale out and perform a rolling restart.

Scale-out the domain

Creating a new managed server is called scale out and removing a server is called scale down. To scale out or scale down, just modify the domain YAML file, change the replicas count, and then apply the configuration. The Operator will get notified and see that the desired replicas count is different than the current count, and then either add or remove a pod. For scale out, the Operator will create a new pod and the WebLogic managed server on that pod will be started. The new server will join the Coherence cluster and Coherence will move the primary and backup data partitions between cluster members to balance the cluster. All this happens automatically and seamlessly, with no service interruption. Let’s scale out the cluster now. Modify domain.yaml and increase the replicas count to three:
...
replicas: 3
Apply the new configuration and get the pods:
kubectl apply -f ./domain.yaml
kubectl get pod -n sample-domain1-ns
...
NAME                              READY   STATUS    RESTARTS   AGE
sample-domain1-admin-server       1/1     Running   0          5h22m
sample-domain1-managed-server-1   1/1     Running   0          5h20m
sample-domain1-managed-server-2   1/1     Running   0          5h18m
sample-domain1-managed-server-3   0/1     Running   0          8s

You can see that sample-domain1-managed-server-3 was created and is starting up.

Domain rolling restart

The WebLogic Kubernetes Operator provides a rolling-restart feature that automatically recycles servers in the domain, one at a time. This is a critical feature in general, because it minimizes service interruptions. It is especially important when using Coherence, to prevent data loss. Coherence is typically configured to store backup data partitions on a different cluster member than the member that owns the primary data. That way, if a cluster member goes down, the data is still safe on the backup partition. However, if both the primary and backup partitions are lost at the same time, then you will lose the data. Furthermore, even if you are only shutting down a single server, you must wait until it is safe to do so. For example, all the cluster members could be running, but Coherence might be shuffling partitions around the cluster. Coherence provides an HAStatus MBean that will let you know if it is safe to shut down a server. If the status is ENDANGERED, then the server must not be shut down.

Operator shutdown script

The Operator includes a set of scripts that run in the pod and perform both startup and shutdown of the domain servers. If the domain has a Coherence cluster configured, then the shutdown script will check the HAStatus and wait until it is safe to shut down the server. This script is configured as a Kubernetes pre-stop hook and is invoked by Kubernetes during pod deletion. Kubernetes will wait a certain amount of time for the script to complete, then it will destroy the pod if that time expires. If the shutdown script returns before the timeout, then Kubernetes will proceed to delete the pod. This timeout is configured by the shutdown.timeoutSeconds field in the Operator domain YAML file. If you have a Coherence cluster, you should set this timeout to a high value, like fifteen minutes, to provide ample time for the cluster to be safe in the worst-case scenario where the shutdown scripts cannot get the Coherence MBean. This is extremely unlikely to ever happen, and the long timeout value won’t affect performance if the MBean is accessible and the script can detect that Coherence is safe to shut down. Let’s activate a rolling-restart by changing the JAVA_OPTIONS testval value to 2. After the new configuration is applied, the Operator will detect the change and begin a rolling restart. Modify domain.yaml and change the testval system property to two:
...
value: "-Dtestval=2"
Now apply the YAML file to begin the rolling-restart:
kubectl apply -f ./domain.yaml

Checking the restart status

You can see in the three output sections below that the rolling restart begins with the administration server, then moves on to managed-server-1. The pod is deleted (Terminating STATUS), then recreated (ContainerCreating STATUS). After the administration server is running again, one of the managed servers will be recycled. Note that the order of managed server recycling is indeterminate.
Get the pods:
kubectl get pod -n sample-domain1-ns
NAME                              READY   STATUS        RESTARTS   AGE
sample-domain1-admin-server       1/1     Terminating   0          5h33m
sample-domain1-managed-server-1   1/1     Running       0          5h30m
sample-domain1-managed-server-2   1/1     Running       0          5h28m
sample-domain1-managed-server-3   1/1     Running       0          6m5s

Get the pods:
kubectl get pod -n sample-domain1-ns
NAME                              READY   STATUS              RESTARTS   AGE
sample-domain1-admin-server       0/1     ContainerCreating   0          1s
sample-domain1-managed-server-1   1/1     Running             0          5h31m
sample-domain1-managed-server-2   1/1     Running             0          5h29m
sample-domain1-managed-server-3   1/1     Running             0          7m13s

Get the pods:
kubectl get pod -n sample-domain1-ns
NAME                              READY   STATUS        RESTARTS   AGE
sample-domain1-admin-server       1/1     Running       0          79s
sample-domain1-managed-server-1   1/1     Terminating   0          5h33m
sample-domain1-managed-server-2   1/1     Running       0          5h30m
sample-domain1-managed-server-3   1/1     Running       0          8m31s

As part of this exercise, we ran the cache validate client after the rolling restart to validate that no data was lost.

Summary

This wraps up our discussion of creating a WebLogic domain with Coherence in Kubernetes. The powerful features of both the WebLogic Operator and the WebLogic Deploy Tooling show you how easy it is to create and deploy a complex domain. In addition, the Operator makes it very simple to do post-provisioning lifecycle operations. We hope this article is helpful and we look forward to your feedback. See the references below for more information.

References
WebLogic Operator on GitHub
WebLogic Operator Documentation
WebLogic Operator Domain Specification
WebLogic Server Deploy Tooling on GitHub
Managed Coherence


Announcing the Oracle WebLogic Server and Oracle Coherence 14.1.1 Beta Program!

As discussed in earlier blogs, we have been working on new releases of WebLogic Server and Oracle Coherence.  Today we are pleased to announce a beta program for WebLogic Server and Coherence 14.1.1.   This is a new major version release for both products, and the follow-on version to the current WebLogic Server 12.2.1.X version, which we continue to support, and for which a 12.2.1.4 maintenance release is planned shortly.   We do not plan to release a “13” version. WebLogic Server 14.1.1 is intended to support Java EE 8 and Jakarta EE 8 (which is fully compatible with Java EE 8), and enhancements in feature areas such as REST, JSON processing, CDI and HTTP/2 support provided in these standards. Coherence 14.1.1 is intended to support new features such as topic-based messaging, distributed tracing support, REST management, and Helidon MP integration.  We will continue to enhance support for these products running in Kubernetes, including both standalone and integrated WebLogic Server and Coherence deployments, and we are investigating potential GraalVM support.   Finally, both products will support both Java SE 8 and Java SE 11, so you will be able to use each of these products on either Java SE version. We are initiating this beta program in order to make the new version available to our customers for evaluation and feedback.  Oracle business practices require that we validate that beta users are existing licensees of WebLogic Server or Coherence (or both), so we are inviting customers to request beta program participation by completing the Beta Testing Candidate Profile Form at https://pdpm.oracle.com.  You will need to sign in using your Oracle login, or register for a new login on the site.   On the form itself you will need to provide some basic company and contact information.    To identify your interest in the WebLogic Server and/or Coherence beta program, please select the “Fusion Middleware” box in item 7, and “WebLogic Server” and/or “Coherence” from the product list that will then be presented.  We will review your candidacy and, assuming proper validation, will send you download instructions soon.   Thanks for your participation!

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, timing, and pricing of any features or functionality described for Oracle’s products may change and remains at the sole discretion of Oracle Corporation.


Technical

Automating WebLogic Deployment - CI/CD with WebLogic Tooling

WebLogic Server has been certified to run on Docker Containers and Kubernetes.  Tooling has been developed to make it easy for customers to migrate their WebLogic applications to run in Kubernetes. The tooling allows users to manage their domains, update their domains and applications, monitor them, persist the logs, and automate the creation and patching of images, which enables the automation of WebLogic deployments through CI/CD processes.

The WebLogic Kubernetes Operator

A Kubernetes Operator is an application-specific controller that extends the Kubernetes API to create, configure, and manage instances of complex applications. We are adopting the Operator pattern and using it to integrate WebLogic Server and Kubernetes, allowing Kubernetes to serve as a container infrastructure hosting WebLogic Server instances. The WebLogic Kubernetes Operator extends Kubernetes to create, configure, and manage a WebLogic domain.  Find the GitHub project for the WebLogic Kubernetes Operator and documentation that includes a Quick Start guide and samples. The WebLogic Kubernetes Operator contains built-in knowledge about how to perform lifecycle operations on a WebLogic Server domain. For example, it knows the order in which it needs to perform a rolling restart of the domain, in order to preserve the guarantees of a WebLogic Server cluster such as session replication and service migration. Some of the operations performed by the operator are:
Provisioning a WebLogic domain
Life cycle management
Updates
Scaling and shrinking the WebLogic cluster
Security by defining RBAC roles
Creating Kubernetes services for intra- and extra-cluster communication of the pods, and for load balancing requests to the managed servers in the WebLogic cluster

The WebLogic Deploy Tooling

The Oracle WebLogic Deploy Tooling (WDT) makes the automation of WebLogic Server domain provisioning and application deployment easy. Instead of writing WLST scripts that need to be maintained, WDT creates a declarative, metadata model that describes the domain, applications, and the resources used by the applications.  This metadata model makes it easy to provision, deploy, and perform domain lifecycle operations in a repeatable fashion, which makes it perfect for the Continuous Delivery of applications. In the GitHub project for the WebLogic Deploy Tooling you can find the documentation and samples.  Read the blog Make WebLogic Domain Provisioning and Deployment Easy! which walks you through a complete sample. WebLogic Deploy Tooling enables you to introspect a domain configuration and the application binaries and then create the domain in a Docker image or on a volume.  WDT supports introspecting 10.3.6, 12.1.3, and 12.2.1.X domains and then creating these domains in WebLogic 12.2.1.X. When invoking WDT discover, two files are created: a yaml model of the domain configuration, and an archive containing the application binaries. Once the yaml model has been created, you can customize and validate your configuration to meet Kubernetes requirements. Invoking WDT create takes the domain yaml model and archive and creates a domain home with the application deployed (a hedged sketch of this discover/create flow follows shortly below). You can find a sample of creating a Docker image with the domain home and application deployed to it with WDT at GitHub Sample.

The Image Tool

The WebLogic Image Tool is an open source tool that allows you to automate building, patching, and updating your WebLogic Server Docker images, including your own customized images.  This tool can be scripted and used in CI/CD processes.
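Before turning to the Image Tool in more detail, here is the hedged sketch of the WDT discover/create flow promised above. The script names are the ones shipped in the weblogic-deploy.zip distribution; the argument names are given from memory and should be verified against the WDT documentation, and the WDT_HOME, Oracle home, and domain paths are placeholders.

# 1. Introspect an existing domain into a model plus an archive of application binaries.
$WDT_HOME/bin/discoverDomain.sh -oracle_home /u01/oracle \
  -domain_home /u01/oracle/user_projects/domains/mydomain \
  -model_file ./mydomain.yaml -archive_file ./mydomain.zip
# 2. Edit and validate the model as needed, then recreate the domain (for example, during a
#    Docker image build) from the same model and archive.
$WDT_HOME/bin/createDomain.sh -oracle_home /u01/oracle \
  -domain_parent /u01/oracle/user_projects/domains \
  -model_file ./mydomain.yaml -archive_file ./mydomain.zip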
Find the WebLogic Image Tool GitHub project at https://github.com/oracle/weblogic-image-tool.  There are four major use cases for this tool:
Create a customized WebLogic Server Docker image where the user can choose:
    The OS base image (e.g. Oracle Linux 7.5).
    The version of Java (e.g. 8u202).
    The version of WebLogic Server or Fusion Middleware Infrastructure (FMW Infrastructure) installer (e.g. 12.1.3, 12.2.1.3).
    A specific Patch Set Update (PSU).
    One or more interim or “one-off” patches.
Patch a base install image of WebLogic.
Patch and build a domain image of WebLogic or FMW Infrastructure using a WebLogic Deploy Tooling model.
Deploy a new application to an already existing domain image.
Using the Image Tool, you can incorporate the use cases above into an automated process for patching and updating all of your WebLogic infrastructure and applications running in Docker and Kubernetes. Oracle recommends that you apply quarterly Patch Set Updates (PSUs) to your WebLogic Server binaries, along with the latest Java update and the latest OS update, as these updates contain important security fixes! The Image Tool leverages an important new capability built into My Oracle Support (MOS) that provides a REST API for specifying and downloading patches.  The Image Tool automatically downloads all the one-off patches and PSUs you specify from My Oracle Support (MOS) using this REST API, including updates to OPatch, if required. The tool checks for patch conflicts by invoking MOS APIs.  You must provide the MOS credentials with the necessary support entitlements, and manually download the WebLogic or Java installers before invoking the tool.  Patches and installers are cached to prevent having to download them multiple times. The Image Tool follows Docker best practices to automatically build an image with the recommended image layering and performs cleanup to minimize image size. The tool ensures that the image remains patchable and only uses standard and publicly documented Oracle tools and APIs.  We have posted a YouTube video which demonstrates using the WebLogic Image Tool to create a customized WebLogic Server install image. If you are interested in learning about how to use these images to automate your CI/CD processes to deploy WebLogic domains in Kubernetes, please refer to our documentation and a demonstration YouTube video using Jenkins.

The Monitoring Exporter

The WebLogic Monitoring Exporter exposes WebLogic Server metrics that can be read and collected by monitoring tools such as Prometheus, and displayed in Grafana dashboards. The WebLogic Monitoring Exporter tool is available in open source here. As it runs, WebLogic Server generates a rich set of metrics and runtime state information that provides detailed performance and diagnostic data about the servers, clusters, applications, and other resources that are running in a WebLogic domain.  The WebLogic Monitoring Exporter enables administrators of Kubernetes environments to easily monitor this data using tools like Prometheus and Grafana, tools that are commonly used for monitoring Kubernetes environments. Now that the entire metric footprint of WebLogic is being exported to Prometheus, you can scale the WebLogic Server cluster by setting rules in Prometheus based on the metrics. When the rule is met, Prometheus calls into the WebLogic Operator to scale the WebLogic cluster. For more information on the design and implementation of the WebLogic Monitoring Exporter, see Exporting Metrics from WebLogic Server.
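Once the wls-exporter web application is deployed to a WebLogic Server instance, its metrics page can be fetched directly with any HTTP client, which is also what Prometheus scrapes. The following is only a hedged sketch: the host, port, and credentials are placeholders for your own environment.

# Fetch the exporter's Prometheus-format metrics page (server URL and credentials are placeholders).
curl -s http://weblogic:welcome1@myserver.example.com:7001/wls-exporter/metrics | head -n 20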
For more information on using Prometheus and Grafana to monitor WebLogic Server on Kubernetes, see WebLogic on Kubernetes monitoring using Prometheus and Grafana. Try the end-to-end sample, which shows you how to monitor your WebLogic domain running in Kubernetes with Prometheus and out-of-the-box Grafana dashboards. The sample also shows you how to set alerting rules in Prometheus. When alert conditions are met, the Prometheus server fires alerts which send notifications to various receivers such as email, Slack, and webhooks.

The Logging Exporter

The WebLogic Logging Exporter exports WebLogic Server logs directly from the WebLogic servers to the Elastic Stack. Logs can be analyzed and then displayed in Kibana dashboards. You can find the logging exporter code in its GitHub project and sample.

Automating WebLogic Deployments with CI/CD and WebLogic Tooling

The combination of these tools enables the automatic deployment of application updates, patching, or changes to the domain configuration through a CI/CD process. In the image below, I describe a scenario where a user automates an application update deployment to a running WebLogic domain through a CI/CD pipeline using the tools.  In a CI/CD process, the pipeline is triggered by a check-in of the deployment archive containing the new version of the application binaries and the WDT model with the configuration changes required for the deployment of the new application.  In step one of the pipeline, the Image Tool update is invoked, passing in the WDT model, the deployment archive, and the name of the base domain image where the new application will be deployed.  The Image Tool creates version2 of the domain image by extending the original domain image from the repository and invoking the WebLogic Deploy Tooling to deploy the new application.  The Image Tool then pushes version2 of the domain image to the local repository. In step two, the pipeline edits the domain.yaml with the new domain image name, version2.  The domain.yaml is used to create the Domain Custom Resource, which is a data structure representation of the WebLogic domain in Kubernetes. The Domain Custom Resource describes the desired running state of the WebLogic domain in Kubernetes and extends the Kubernetes APIs to begin serving the custom resource object. Users will want to version control their domain.yaml in case they need to roll back to the previous state of the Domain Custom Resource. In step three, the pipeline invokes “kubectl apply -f domain.yaml”. This changes the Domain Custom Resource to define a new image on which the WebLogic Server pods/containers are based. When the Operator observes a change in the image name of the Domain Custom Resource, it triggers a rolling restart of the WebLogic domain.  The Operator will shut down the servers in a graceful manner to allow for service migration or session data replication to other servers of the WebLogic cluster and thus guarantee high availability during the roll out of the updates. Changes to the WebLogic deployments and domain configuration can be visualized in Prometheus and the Grafana dashboards. Users can also see the updates in the WebLogic Server logs being exported to the Elastic Stack and displayed in Kibana dashboards.

Summary

The WebLogic team continues enhancing these tools to facilitate and automate updating WebLogic deployments while offering the least interruption to the running applications.
Some features in the Roadmap of the tools are:
A UI for the WebLogic Kubernetes Operator
Support of Istio service mesh
Monitoring applications
Exporting application and access logs
Support of other logging tools like FluentD and Logstash
Support of the tools for new versions of WebLogic
Usability and Performance improvements

We encourage you to try these tools to help you manage WebLogic deployments in Kubernetes.  Please stay tuned for more information. We hope this blog is helpful to those of you seeking to patch, update, and deploy through an automated CI/CD process, and look forward to your feedback.

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, timing, and pricing of any features or functionality described for Oracle’s products may change and remains at the sole discretion of Oracle Corporation.


Announcement

WebLogic announces support for CRI-O container runtime

At Oracle OpenWorld, we announced that we will now support WebLogic Server on additional container runtimes with Kubernetes.  Up to now we have supported only Docker as the container runtime.  Starting today, we will support both Docker and CRI-O.  We will continue to evaluate and update our position based on customer demand and industry trends.   Why are we making this change?   There are several trends driving this change:   CRI-O is now part of Kubernetes and is very lightweight.  CRI-O is runtime agnostic - it is a framework that permits the implementation of different deployment options for containers.  Because it is fully integrated with Kubernetes, such implementations can be done with limited user impact.  For example, Oracle Linux supports runC and Kata Containers today.  In the future it would be possible to support emerging projects like firecracker if that became desirable.  We currently support and certify WebLogic on OpenShift 3.11.  OpenShift 4 has moved to Red Hat Enterprise Linux 8/Red Hat CoreOS, and these operating systems have deprecated Docker support [9] and replaced it with CRI-O [1] and runc as the container runtime, along with tools including Podman [2] (replacing the docker CLI), Buildah [3] (for building images) and Skopeo [4] (for copying images and interacting with registries).  In order to support/certify WebLogic on OpenShift 4, we need to also support CRI-O.  Oracle Linux 8 has also added support for Podman, Buildah and Skopeo container tools [8], as RHEL8 is its upstream distribution.  Kata Containers [6] with CRI-O is generally considered to provide better isolation and address the main security concerns that have been expressed with Docker’s daemon-based architecture.  CRI-O is a CNCF project, and Oracle is a Platinum member of CNCF.  Docker is a private company who have commercial and open source offerings.  The Open Container Initiative (OCI) [5] is hosted by the Linux Foundation and manages the specification for containers.  Adding support for CRI-O is a continuation of Oracle’s support of open standards managed by CNCF and Linux Foundation and reinforces our commitment to open standards and open source in general.   Docker is still widely popular.  We are not dropping support for Docker.  We are simply adding support for CRI-O in addition to Docker.  What is CRI-O?   From their site:   CRI-O is an implementation of the Kubernetes CRI (Container Runtime Interface) to enable using OCI (Open Container Initiative) compatible runtimes. It is a lightweight alternative to using Docker as the runtime for Kubernetes. It allows Kubernetes to use any OCI-compliant runtime as the container runtime for running pods. Today it supports runc and Kata Containers as the container runtimes but any OCI-conformant runtime can be plugged in principle.  CRI-O supports OCI container images and can pull from any container registry. It is a lightweight alternative to using Docker, Moby or rkt as the runtime for Kubernetes.  To a large extent, this means that CRI-O is a drop in replacement for Docker.  From the point of view of a Kubernetes operator/application workload, there is almost no discernible difference between Kubernetes running with Docker and Kubernetes running with CRI-O.  All of the Kubernetes functions work the same.    We have completed all of our testing and certification testing for WebLogic Kubernetes Operator 2.3.0 and WebLogic 12.2.1.3.0 on Kubernetes with Kata and CRI-O, including running our full integration test suite successfully.  
There was literally no code change required.  The creation and management of Docker/OCI images would be done with different tools.  The images and containers these tools create and manipulate are 100% compatible with those created by Docker.  In fact they are interchangeable.  Most of these tools also try to have the same command line options as the Docker CLI to ease migration. A short, hedged Podman example is included after the references at the end of this post.
Podman replaces the Docker CLI for running containers, etc.
Buildah replaces Docker build for creating images.
Skopeo provides tooling to search and manage Docker registries (which Docker is weak at).
crictl replaces Docker pull for pulling images from remote Docker registries.
At the lower level, each pod/container is run in a very lightweight QEMU VM by Kata Containers.  The Docker daemon (which runs as root and is the main security issue cited with Docker) is completely removed from the picture.  It is important to note that you do not need to create new images - the existing Docker images work as-is with CRI-O.  So the overall layering looks like this:

Existing supported stack    New additionally supported stack
Kubernetes/Docker           Kubernetes
containerd                  CRI-O
runC                        kata
container-kernel            container-kernel

On the left we have the currently supported runtime stack.  On the right is the stack that we will start to support in addition to the one on the left.  In addition to the runtime, there are the tools mentioned above to build, tag, and push/pull images, etc.  Several references are included below if you want to learn more.  If you want to get into the real nitty-gritty detail, this is an excellent read: https://bit.ly/2mcJd1n

References
[1] CRI-O https://cri-o.io/
[2] Podman https://podman.io
[3] Buildah https://buildah.io
[4] Skopeo https://github.com/containers/skopeo
[5] Open Container Initiative https://www.opencontainers.org/
[6] Kata https://github.com/kata-containers
[7] Docker security https://superuser.openstack.org/articles/how-kata-containers-boost-security-in-docker-containers/
[8] Oracle Linux support statement https://docs.oracle.com/cd/F12552_01/F12584/html/ol8-features-container.html
[9] RedHat support statement https://access.redhat.com/solutions/3696691
[10] runC, containerd and Docker Engine architecture https://www.slideshare.net/PhilEstes/diving-through-the-layers-investigating-runc-containerd-and-the-docker-engine-architecture
[11] What is containerd? https://blog.docker.com/2017/08/what-is-containerd-runtime/
[12] CRI-O vs containerd http://crunchtools.com/competition-heats-up-between-cri-o-and-containerd-actually-thats-not-a-thing/
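As promised above, here is a short, hedged Podman example of the drop-in point: the existing WebLogic image used elsewhere in this blog can be pulled and run with Podman using the same arguments you would pass to the Docker CLI. It assumes Podman is installed and that you have logged in to the Oracle Container Registry and accepted the image's terms.

# Log in and pull the same image you would normally pull with Docker.
podman login container-registry.oracle.com
podman pull container-registry.oracle.com/middleware/weblogic:12.2.1.3-dev
# The familiar sub-commands work unchanged.
podman images | grep weblogic
podman run --rm container-registry.oracle.com/middleware/weblogic:12.2.1.3-dev \
  sh -c 'echo "running inside a WebLogic image under Podman"'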


The WebLogic Server

End to end example of monitoring WebLogic Server with Grafana dashboards on the OCI Container Engine for Kubernetes

In previous blogs, we described how to run WLS on Kubernetes with the Operator using the Oracle Cloud Infrastructure (OCI) Container Engine for Kubernetes and how to set up the Monitoring Exporter with Prometheus and Grafana.  In this blog, we will demonstrate the steps to set up and monitor WebLogic Server runtime metrics and corresponding Grafana dashboards for servers, web applications, data sources and JMS services on the OCI Container Engine for Kubernetes (OKE).

Prerequisites
A workstation with Docker and kubectl installed and configured.
The Oracle Container Engine for Kubernetes (OKE). To set up a Kubernetes managed service on OCI, follow the documentation, Overview of Container Engine for Kubernetes.
Have Helm installed, version 2.14.0 or later.
Clone the WebLogic Monitoring Exporter project from GitHub.

Step-by-Step Guide

This end-to-end sample shows you the steps to set up monitoring for WebLogic domains using Prometheus and Grafana. When completed, you'll have Prometheus, Grafana, and WebLogic domains installed, configured, and running. This sample includes Grafana dashboards to visualize the WebLogic Server runtime metrics.  Before getting started, clone the WebLogic Monitoring Exporter project from GitHub. It contains many of the samples and helpers we will need.
git clone https://github.com/oracle/weblogic-monitoring-exporter.git
Here is an overview of the process:
Configure the persistent volume.
Prepare and run a WebLogic domain and operator.
Set up MySQL Server.
Install the WebLogic Kubernetes Operator.
Run a WebLogic domain.
Set up Prometheus.
Set up Grafana.

Configuring the Persistent Volume

In this sample, we need to create four persistent volumes (PVs) and persistent volume claims (PVCs) to store data for MySQL, the Prometheus server, the Prometheus Alert Manager, and Grafana.  High-level steps:
Create a mount target.
Create three file systems, one per persistent volume: MySQL, Prometheus, Prometheus Alert Manager.
Create a storage class.
Create a persistent volume (PV).
Create a persistent volume claim (PVC).
Let’s review all these steps in detail.
Log in to the OCI console. Verify that you are using the correct region, for example “us-phoenix-1”, and compartment, for example “Quality Assurance”.
Select ‘Menu’ -> ‘Developer Services’ -> ‘Container Clusters (OKE)’ and verify that your OKE cluster is up and running.
Select ‘Menu’ -> ‘Networking’ -> ‘VCN’. Select the VCN assigned to your cluster and note the CIDR range, for example ‘10.7.0.0/16’.
From the OCI Menu, select ‘File Storage’ -> ‘Mount Targets’. Click on ‘Create Mount Target’ and select the same Virtual Cloud Network (VCN) as your cluster, for example: mkcluster_vcn.
Click on the created Mount Target link and create three file systems by using the ‘Create Export’ button under the ‘Exports’ section, one per desired PV.
From the OCI Menu, select ‘Networking’ -> ‘Virtual Cloud Networks’.  Choose the VCN assigned to your cluster, for example mkcluster_vcn. Under Security Lists, select ‘Create Security List’ and create a new Security List, for example mkcluster-default-security-list.
Create four Ingresses (two TCP [2048-2050, 111] and two UDP [2048, 111] destination ports) and three Egresses (two TCP [2048-2050, 111] and one UDP [111] source ports). The IP range should match the CIDR range in the corresponding VCN; in our example it is ‘10.7.0.0/16’.
Here is an example of the created Ingress Rules:
Here is an example of the Egress Rules:
Create the Storage Class with provisioner oracle.com/oci-fss and the OCID from the Mount Target (for example: ocid1.mounttarget.oc1.phx.abcdby27ve3wk6bobuhqllqojxwiotqnb4c2ylefuzabcd). Here is an example of storageclass.yaml (a hedged reconstruction is included at the end of this section):
Execute this command to create the Storage Class:
kubectl apply -f storageclass.yaml

Setting up MySQL Server

Modify the file weblogic-monitoring-exporter/samples/kubernetes/end2end/mysql/persistence.yaml to use the NFS type for the Persistent Volume (PV). Use your Mount Target IP address (for example: 10.7.10.4) and the File System name (for example: /FileSystem-20190806-2144) for the root of the persistent volume. Use the created Storage Class for the PV and PVC (for example: oci-fss).  Here is the example modified weblogic-monitoring-exporter/samples/kubernetes/end2end/mysql/persistence.yaml file from the end-to-end sample:
Push the mysql image to the OCIR registry by executing these steps:
docker pull mysql
docker tag mysql:latest phx.ocir.io/weblogick8s/db/mysql:5.7
docker login phx.ocir.io
Username: weblogick8s/[OCIUsername]
Password: [Auth Key of OCI]
docker push phx.ocir.io/weblogick8s/db/mysql:5.7
Create the OCIR secret:
kubectl create secret docker-registry ocirsecret --docker-server=phx.ocir.io --docker-username=weblogick8s/[OCIRUsername] --docker-password=[OCIRPassword] --docker-email=[OCIRDockerEmail] -n default
Modify the file weblogic-monitoring-exporter/samples/kubernetes/end2end/mysql/mysql.yaml to add a securityContext section to add permission for the user mysql, add the OCIR image info, and add the pull secret:
Follow the instructions from the end-to-end sample to deploy the PV and PVC and create the MySQL database, using the modified mysql.yaml and persistence.yaml files.

Installing the WebLogic Kubernetes Operator and WebLogic Server Domain

Follow the instructions from the end-to-end sample to create the WebLogic Kubernetes Operator. In our example, we will use the ‘default’ namespace for the WebLogic domain and the ‘monitoring’ namespace for running Prometheus and Grafana. Use helm to install the Traefik load balancer. Use the values.yaml in the sample but set kubernetes.namespaces specifically. Go to the OCI dashboard -> ‘Networking’ -> ‘LoadBalancers’, select your compartment, and verify that the instance was created and is running. Note the public IP address created for the load balancer, for example ‘129.146.192.196’. Ask your DNS provider to create three host names and match them with this IP address. For our example, we have created:
e2e-wls.weblogick8s.org to access the WebLogic Domain
e2e-prom.weblogick8s.org to access the Prometheus dashboard
e2e-grafana.weblogick8s.org to access the Grafana dashboard
In OKE, we will be using the Traefik load balancer. To run the WebLogic Monitoring Exporter with the load balancer, we need to add an extra property into the configuration file. Modify the weblogic-kubernetes-monitoring-exporter/samples/kubernetes/end2end/dashboard/exporter-config.yaml configuration file to add the restPort info. Insert this line above ‘queries’: restPort: 8001
Follow the instructions from the end-to-end sample to create the domain image.
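Before moving on to pushing the domain image, here is the hedged reconstruction of storageclass.yaml promised above. The Storage Class name, provisioner, and Mount Target OCID are the values given earlier in this section; the mntTargetId parameter name and the apiVersion are assumptions based on the OCI FSS volume provisioner documentation, so verify them against that project before applying.

# Write the storage class definition (substitute your own Mount Target OCID), then apply it
# with the kubectl apply command shown above.
cat <<'EOF' > storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: oci-fss
provisioner: oracle.com/oci-fss
parameters:
  mntTargetId: ocid1.mounttarget.oc1.phx.abcdby27ve3wk6bobuhqllqojxwiotqnb4c2ylefuzabcd
EOF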
Push the created Domain Image to the OCIR registry:

docker tag domain1-image:1.0 phx.ocir.io/weblogick8s/endtoend/domain1-image:1.0
docker push phx.ocir.io/weblogick8s/endtoend/domain1-image:1.0

For the domain deployment, replace the image name in the weblogic-monitoring-exporter/samples/kubernetes/end2end/demo-domains/domain1.yaml file to reflect the created domain image name in the OCIR, for example:

image: phx.ocir.io/weblogick8s/endtoend/domain1-image:1.0

Deploy the Traefik path-routing rules for the domain. Then:
- Verify that you can access the WebLogic domain console page via a browser, using this URL: http://e2e-wls.weblogick8s.org/console
- Verify that the wls-exporter application was deployed and is accessible, using this URL: http://e2e-wls.weblogick8s.org/wls-exporter
- Make sure that you can see the generated metrics by clicking the ‘metrics’ link from the exporter console or by using this URL: http://e2e-wls.weblogick8s.org/wls-exporter/metrics

Setting up Prometheus
In the end-to-end sample, we have provided instructions on how to start Prometheus using a hostPath Persistent Volume (PV). For an OKE environment, let’s modify the persistent volume and persistent volume claim configurations to use the NFS type. As with the MySQL PV and PVC configurations, edit the provided end-to-end sample YAML file, weblogic-monitoring-exporter/samples/kubernetes/end2end/prometheus/persistence.yaml, to replace hostPath with nfs. Use your Mount Target IP address (for example: 10.7.10.4) and the File System name (for example: /FileSystem-20190806-2142) for the root of the Persistent Volume. Modify the weblogic-monitoring-exporter/samples/kubernetes/end2end/prometheus/alert-persistence.yaml file in the same way for the Prometheus Alert Manager PV; in our example we use “/FileSystem-20190806-2140”.

Follow the instructions from the end-to-end sample for Setting Up Prometheus to install all deployments, and check that all Prometheus pods in the monitoring namespace are running. In order to access the Prometheus dashboard in OKE, redeploy the Prometheus service to switch to the ClusterIP type:

kubectl delete svc prometheus-server -n monitoring

Then use an override.yaml file that sets the service type and redeploy it:

kubectl apply -f override.yaml

Finally, create the Ingress for Prometheus (host e2e-prom.weblogick8s.org) pointing to the prometheus-server service. Now you can access the Prometheus Dashboard by using this URL: http://e2e-prom.weblogick8s.org. Go to ‘Status’ -> ‘Targets’ to verify that the WebLogic domain targets are up and running.

Setting up Grafana
Install Grafana using a Helm chart with a customized values.yaml file. Modify the provided end-to-end sample values.yaml file to use the created Persistent Volume and Persistent Volume Claim, and switch the service to the ClusterIP type. Create the Grafana administrative credentials:

kubectl --namespace monitoring create secret generic grafana-secret --from-literal=username=admin --from-literal=password=12345678

Install the Grafana chart:

helm install --wait --name grafana --namespace monitoring --values grafana/values.yaml stable/grafana

Create the Grafana Ingress (host e2e-grafana.weblogick8s.org) in the same way as the Prometheus Ingress. Now you should be able to access the Grafana dashboard by using this URL: http://e2e-grafana.weblogick8s.org

To complete the Grafana configuration, you need to create the data source and the WebLogic dashboard. Here is an example of how to do it using the Grafana REST API.
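The exact commands are part of the end-to-end sample; the following is only a sketch of what the two calls can look like. It assumes the admin credentials created above (admin/12345678), the e2e-grafana.weblogick8s.org host, and JSON files named datasource.json and dashboard.json in the sample's grafana directory (adjust the paths to wherever your copies live). Grafana exposes these operations under its /api/datasources and /api/dashboards/db endpoints.

curl -v -H "Content-Type: application/json" \
  -X POST http://admin:12345678@e2e-grafana.weblogick8s.org/api/datasources \
  --data-binary @grafana/datasource.json

curl -v -H "Content-Type: application/json" \
  -X POST http://admin:12345678@e2e-grafana.weblogick8s.org/api/dashboards/db \
  --data-binary @grafana/dashboard.json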
The first call creates the Prometheus data source from the predefined JSON file datasource.json; the second creates the WebLogic Server Dashboard from the predefined dashboard.json file. Log in to the Grafana dashboard, select the WebLogic Server Dashboard, and check all the available metrics. Click on 'Servers', 'Web Applications', 'Data Source', and 'JMS Services' to view the provided panels.

Setting up a Webhook and Firing Alerts
Follow the instructions from the end-to-end sample to set up a Webhook. The only difference for an OKE environment is to push the created Webhook image to the OCI Registry and use that image in the deployment configuration:

docker tag webhook-log:1.0 phx.ocir.io/weblogick8s/endtoend/webhook-log:1.0
docker push phx.ocir.io/weblogick8s/endtoend/webhook-log:1.0

Modify the image name in weblogic-monitoring-exporter/samples/kubernetes/end2end/webhook/server.yaml to phx.ocir.io/weblogick8s/endtoend/webhook-log:1.0. After verifying that your Webhook deployment is up and running, follow the instructions from the end-to-end sample to fire alerts.

Summary
In this blog, we have demonstrated all the required steps to set up and run an end-to-end sample that monitors WebLogic domains using Prometheus and Grafana in the OCI Container Engine for Kubernetes (OKE). We have seen how to access the administration console, how to set up load balancing and expose applications outside the OKE cluster, how to set up, install, and run Prometheus, Grafana, and WebLogic domains, and how to use Grafana dashboards to visualize the WebLogic Server runtime metrics. You can add to the Grafana dashboards provided in this sample by creating your own dashboards to display the different metrics you want to monitor.
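Here is the storage class sketch referenced in the persistent-volume section above. The provisioner name and the mount target OCID come from that section; the mntTargetId parameter name follows the OCI File Storage volume provisioner documentation, so double-check it (and the storage class name you reference from your PVs and PVCs, oci-fss in this example) against the provisioner version you deploy.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: oci-fss
provisioner: oracle.com/oci-fss
parameters:
  # OCID of the Mount Target created earlier
  mntTargetId: ocid1.mounttarget.oc1.phx.abcdby27ve3wk6bobuhqllqojxwiotqnb4c2ylefuzabcd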


Oracle Enterprise Java Highlights at OpenWorld and Code One

We’re looking forward to Oracle OpenWorld and Oracle Code One, to be held September 16-19 in San Francisco. The Oracle Enterprise Java team is delivering or participating in many sessions, panel discussions, theatre presentations, and hands-on labs in both conferences. For those of you attending the conference, I’ve provided links below to specific session listings to help you navigate to the events of greatest interest to you. If you’re unable to attend, here are some of the highlights:

We’ll be covering how to run Oracle WebLogic Server and Coherence applications in public and private clouds, including:
- The latest updates on WebLogic Server Kubernetes tools, such as the Oracle WebLogic Server Operator 2.3, just released, and the Istio support we’re providing.
- The Oracle Coherence Operator released at the end of July.
- Support for the CRI-O container runtime, in addition to Docker.
- Updates to our WebLogic Cloud offerings on the Oracle Cloud Marketplace.
- New Azure ARM templates for running Oracle WebLogic Server on Microsoft Azure IaaS.
- Success stories from customers using these technologies, including CERN, DataScan, and Intris.

We will also be covering Helidon and Helidon 1.3, just released, which includes MicroProfile 3.0 support. We have sessions on the technology itself, building microservices with Helidon, Helidon support for GraalVM, and integrating Helidon and Oracle WebLogic Server applications on Kubernetes.

In terms of upcoming releases, you may have seen the Jakarta EE 8 announcement last week, and our intention to support Jakarta EE 8 in an upcoming release of Oracle WebLogic Server. We will be announcing a customer beta program for Oracle WebLogic Server and Coherence 14.1.1, and inviting customers to request participation. This release will provide Java EE 8 and Jakarta EE 8 support, as well as other features for cloud and microservices integration. We will be delivering our final maintenance releases of Oracle WebLogic Server and Coherence 12.2.1.X very soon. Finally, we will be discussing our strategy for enabling integration and management of traditional applications and microservices in hybrid cloud environments.

Here are links to session guides on these topics. Enjoy the shows!

Session/Program Guides for OpenWorld
- Oracle WebLogic Server, Coherence and Helidon: http://bit.ly/oow19_WCH
- Oracle WebLogic Server: http://bit.ly/oow19_WLS
- Oracle Coherence: http://bit.ly/oow19_COH
- Helidon: http://bit.ly/oow19_Helidon

Session/Program Guides for CodeOne
- Microservices: http://bit.ly/code19_microservices
- Jakarta EE: http://bit.ly/code19_JakartaEE
- Helidon: http://bit.ly/code19_Helidon
- Kubernetes: http://bit.ly/code19_Kubernetes

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, timing, and pricing of any features or functionality described for Oracle’s products may change and remains at the sole discretion of Oracle Corporation.


The WebLogic Server

Oracle WebLogic Server on Microsoft Azure IaaS

We are pleased to announce another aspect of the partnership between Oracle and Microsoft. In early June 2019, Oracle and Microsoft announced their cloud interoperability partnership. We are now announcing another key piece in that story: Oracle WebLogic Server on Microsoft Azure IaaS. In addition to the exciting work on the WebLogic Kubernetes Operator and the Coherence Kubernetes Operator, the WebLogic team at Oracle is hard at work creating several interoperating Azure ARM templates and corresponding Azure Marketplace offers to cover the most common needs of deploying WebLogic Server to IaaS resources on Microsoft Azure. The following Marketplace offers are all based on Oracle WebLogic Server 12.2.1.3 running on Oracle Linux 7.6:

- Create a single VM with a WebLogic "Admin Only" domain pre-configured.
- Create an N-node WebLogic cluster with the admin server on one VM and cluster members on other VMs. The admin server and all managed servers are started by default when the provisioning completes. The Admin Server and NodeManager are started as systemd services, and CrashRecoveryEnabled is set to true for the NodeManager, so even after a VM reboot the servers are restarted automatically (a brief configuration sketch follows at the end of this post). Additional nodes can be added to the cluster using the Azure CLI.
- Create an N-node WebLogic cluster as in the preceding offer, but with an Azure Load Balancer automatically configured for the cluster.
- Create an N-node WebLogic dynamic cluster with the admin server on one VM and managed servers in a dynamic cluster on the other nodes. The WLS administrator can scale up from the admin console or WLST to start additional managed servers on the available VMs. As in the preceding offer, the services are configured to auto-start, and more nodes can be added with the Azure CLI.

Please follow this blog for updates on the availability of these offers. We expect to have them available in the Azure Marketplace by the end of October 2019. You can also visit our "Contact Me" offer in the Azure Marketplace to be notified of the availability of the offers. For more information on the Oracle and Microsoft partnership, visit https://azure.microsoft.com/en-us/solutions/oracle/
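For the curious, the auto-restart behavior described above rests on two standard mechanisms: the Node Manager's CrashRecoveryEnabled property (so crashed servers are restarted) and systemd units for the Admin Server and Node Manager (so they come back after a VM reboot). The offers configure all of this for you; the snippet below is only an illustrative sketch with hypothetical unit and path names, not the exact configuration shipped in the templates.

# nodemanager.properties (excerpt)
CrashRecoveryEnabled=true

# hypothetical systemd unit for the Node Manager
[Unit]
Description=WebLogic Node Manager
After=network.target

[Service]
Type=simple
User=oracle
ExecStart=/u01/domains/wlsd/bin/startNodeManager.sh
Restart=always

[Install]
WantedBy=multi-user.target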


Configuring WebLogic Server JTA Automatic Service Migration with Dynamic Clusters Using WLST and REST

WebLogic Server JTA service migration allows for in-progress transactions that are coordinated by a clustered server that fails to be recovered and resolved by another cluster member.  This capability helps to resolve transactions quickly, potentially reducing the amount of time that resource manager locks are held that could impede other transactions. There are two types of clusters supported by WebLogic Server, configured and dynamic.  Configured clusters consist of pre-defined Server instances that reference a cluster configuration.  Dynamic clusters utilize a Server Template to create new cluster members at runtime in response to administrative commands, or performance-based rules, and is the default clustering model used in Kubernetes WebLogic Operator environments.  For additional details about WebLogic Server clustering, refer to “Administering Clusters on Oracle WebLogic Server“. JTA service migration can be configured to automatically fail over when a server terminates by setting the JTA Migratable Target - Migration Policy attribute on the clustered servers to either “failure-recovery” or “shutdown-recovery”.  By default, all servers in the cluster are considered candidates for migration of any server’s transaction recovery service.  For configured clusters, a server can restrict the set of cluster members that can process its recovery service by assigning a subset of the cluster members to the Constrained Candidate Servers attribute.     The WebLogic Server Administration Console exposes JTA service migration configuration attributes on the Server > Configuration > Migration page.  The following screenshot shows a portion of the Server Migration page for a configured cluster containing two Managed Servers (Server-1 and Server-2) where the Migration Policy attribute is set to Failure Recovery, and the Strict Ownership Check attribute is set to true. To configure JTA auto-migration in a dynamic cluster, it is necessary to set the JTA Migratable Target attributes on the Cluster Server Template.  However, in WebLogic Server versions 12.2.1.3.0 and earlier, the JTA Migratable Target fields are not exposed on the Administration Console’s Server Template > Configuration > Migration page (see the Administration Console screenshot below).  Due to the lack of Console support, an administration client or API, such as WLST or REST, is required to make the necessary configuration changes to the Server Template JTA Migratable Target.     There are a couple of additional points regarding JTA service migration configuration with dynamic clusters.  First, it is not possible to define a constrained list of candidate servers.  All members of the dynamic cluster are considered candidates for failover.  Finally, if the dynamic cluster’s Migration Basis attribute is set to “Database”, then the Data Source for Automatic Migration attribute must also be set.  This attribute is available on the Administration Console Cluster > Configuration > Migration page. For additional details on configuring JTA automatic service migration, refer to the WebLogic Server documentation section “Roadmap for Configuring Automatic Migration of the JTA Transaction Recovery Service“. Examples The following examples show how to set the JTA auto-migration fields on a cluster server template using WLST and REST endpoints.  The scripts operate on a domain that contains a dynamic cluster with a server template named “DynCluster-Template”.  
Both examples set the JTA Migration Policy and Strict Ownership Check attributes to “failure-recovery” and “true”, respectively. Note that in both scripts, WebLogic Server administrative credentials need to be substituted for username and password, and the Administration Server’s address and port number should be specified in place of host:7001. Also substitute the actual cluster server template name for DynCluster-Template.

WLST

connect("username","password","t3://host:7001")
edit()
startEdit()
cd('/ServerTemplates/DynCluster-Template/JTAMigratableTarget/DynCluster-Template')
cmo.setMigrationPolicy('failure-recovery')
cmo.setStrictOwnershipCheck(true)
activate()

REST

curl -v --user username:password -H X-Requested-By:MyClient \
  -H Accept:application/json -H Content-Type:application/json \
  -d "{}" -X POST "http://host:7001/management/weblogic/latest/edit/changeManager/startEdit"

curl -v --user username:password -H X-Requested-By:MyClient \
  -H Accept:application/json -H Content-Type:application/json \
  -d "{ 'migrationPolicy': 'failure-recovery', 'strictOwnershipCheck': true }" \
  -X POST "http://host:7001/management/weblogic/latest/edit/serverTemplates/DynCluster-Template/JTAMigratableTarget"

curl -v --user username:password -H X-Requested-By:MyClient \
  -H Accept:application/json -H Content-Type:application/json \
  -d "{}" -X POST "http://host:7001/management/weblogic/latest/edit/changeManager/activate"

Resulting Configuration
Each of the example scripts will modify the domain configuration so that the Server Template JTA Migratable Target element contains the updated Migration Policy and Strict Ownership Check attributes.

  <server-template>
    <name>DynCluster-Template</name>
    <ssl>
      <listen-port>8100</listen-port>
    </ssl>
    <machine xsi:nil="true"></machine>
    <listen-port>7100</listen-port>
    <cluster>DynCluster</cluster>
    <web-server>
      <web-server-log>
        <number-of-files-limited>false</number-of-files-limited>
      </web-server-log>
    </web-server>
    <administration-port>9002</administration-port>
    <jta-migratable-target>
      <cluster>DynCluster</cluster>
      <migration-policy>failure-recovery</migration-policy>
      <strict-ownership-check>true</strict-ownership-check>
    </jta-migratable-target>
  </server-template>

Summary
This article showed how to enable JTA automatic service migration in a dynamic cluster by using the WLST and REST APIs to update the necessary Server Template configuration attributes. JTA automatic service migration with dynamic clusters lets applications scale dynamically based on runtime metrics while potentially improving throughput and reducing the risk of data loss in the event of a server failure.
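After activating the changes, one quick way to confirm the new settings (besides inspecting config.xml) is to read the JTA Migratable Target back through the same REST edit tree. This GET is a sketch that reuses the placeholders from the examples above; the response should show migrationPolicy set to failure-recovery and strictOwnershipCheck set to true.

curl -v --user username:password -H Accept:application/json \
  -X GET "http://host:7001/management/weblogic/latest/edit/serverTemplates/DynCluster-Template/JTAMigratableTarget"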


Announcement

Automate WebLogic image building and patching!

We are pleased to announce the release of the WebLogic Image Tool. The WebLogic Image Tool is an open source tool that allows you to automate building, patching, and updating your WebLogic Server Docker images, including your own customized images.  This tool can be scripted and used in CI/CD processes. Find the WebLogic Image Tool GitHub project at https://github.com/oracle/weblogic-image-tool.  There are three major use cases for this tool: Create a customized WebLogic Server Docker image where the user can choose: The OS base image (e.g. Oracle Linux 7.5). The version of Java (e.g. 8u202). The version of WebLogic Server or Fusion Middleware Infrastructure (FMW Infrastructure) installer (e.g. 12.1.3, 12.2.1.3). A specific Patch Set Update (PSU). One or more interim or “one-off” patches. Patch a base install image of WebLogic or FMW Infrastructure. Patch and build a domain image of WebLogic or FMW Infrastructure using a WebLogic Deploy Tool model. Using the Image tool, you can incorporate the use cases above into automated processes for patching all of your WebLogic infrastructure and applications running in Docker and Kubernetes. The Image Tool leverages an important new capability built into My Oracle Support (MOS) that provides a REST API for specifying and downloading patches.  The Image Tool automatically downloads all the one-off patches and PSUs you specify from My Oracle Support (MOS) using this REST API, including updates to OPatch, if required. The tool checks for patch conflicts invoking MOS APIs.  You must provide the MOS credentials with the necessary support entitlements, and manually download the WebLogic or Java installers before invoking the tool.  Patches and installers are cached to prevent having to download them multiple times. The Image Tool follows Docker best practices to automatically build an image with the recommended image layering, following best practices, and performing cleanup to minimize image size. The tool ensures that the image remains patchable and only uses standard and publicly documented Oracle tools and APIs.  We have posted a YouTube video which demonstrates using the WebLogic Image Tool to create a customized WebLogic Server install image. If you are interested in learning about how to use these images to automate your CI/CD processes to deploy WebLogic domains in Kubernetes, please refer to our documentation and a demonstration YouTube video using Jenkins.  Our future plans include new features and enhancements to the tool over time. Please stay tuned for more information. We hope this announcement is helpful to those of you seeking to patch, update, and deploy WebLogic Server Docker images, and look forward to your feedback.
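As a concrete illustration of the first use case, an invocation typically looks something like the sketch below: the manually downloaded installers are registered in the tool's cache first, and then the create command builds the image, optionally pulling the latest PSU and one-off patches from My Oracle Support. The tag, versions, file paths, and patch number here are placeholders, and flags can vary between tool versions, so treat the project README as the source of truth.

# register locally downloaded installers with the tool's cache
imagetool.sh cache addInstaller --type jdk --version 8u202 --path /tmp/installers/jdk-8u202-linux-x64.tar.gz
imagetool.sh cache addInstaller --type wls --version 12.2.1.3.0 --path /tmp/installers/fmw_12.2.1.3.0_wls_Disk1_1of1.zip

# build a patched WebLogic Server image; MOS credentials are needed to download patches
imagetool.sh create --tag mycompany/weblogic:12.2.1.3-patched \
  --version 12.2.1.3.0 --jdkVersion 8u202 \
  --latestPSU --patches 29135930 \
  --user you@example.com --passwordEnv MOS_PASSWORD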


Announcement

WebLogic Operator 2.2 Support for ADF Applications

We are excited to announce the release of version 2.2.0 of the WebLogic Server Kubernetes Operator. The operator provides support for Fusion Middleware Infrastructure domains running on Kubernetes. This means that WebLogic Server applications using Oracle’s Application Development Framework (ADF) can now be deployed and managed in Kubernetes using the WebLogic Operator. The Operator uses a common set of Kubernetes APIs to provide support for operations such as: provisioning, life cycle management, application versioning, product patching, scaling, and security.  This version of the operator manages Fusion Middleware (FMW) Infrastructure domains running on Kubernetes.  It adds support for FMW Infrastructure domain configurations that are persisted to a Persistent Volume (PV/PVC), and enabling the use of tooling that is native to this infrastructure for monitoring, logging, tracing, security, and such.  The operator source code, samples, and documentation can be found in the GitHub project repository, and the operator image is available to be pulled from Docker Hub. In this version of the Kubernetes Operator, we have added the following functionality and support: An FMW Infrastructure 12.2.1.3 install Docker image which is supported to run in production.  Samples in our Docker GitHub project for creating an FMW Infrastructure domain, and for patching the install image. Documentation and samples in our weblogic-kubernetes-operator GitHub project that provide details about: How to pull the pre-built FMW Infrastructure Docker install image from the Oracle Container Registry.  This image already has patch  29135930 applied, which is required to manage the domain with the operator. How to run the Repository Creation Utility (RCU) to create database schema, including when the database is internal to the Kubernetes cluster and when the database is external to the Kubernetes cluster. An end-to-end sample to deploy an FMW Infrastructure domain in Kubernetes managed by the operator. Our future plans include enhancements to the WebLogic Monitoring Exporter and Logging Exporter to support FMW Infrastructure domains. In addition, by providing support for Fusion Middleware Infrastructure domains, and ADF applications, we have begun to deliver support for Fusion Middleware products running on Kubernetes using the WebLogic Operator. We hope this announcement is helpful to those of you seeking to deploy FMW Infrastructure applications on Kubernetes, and look forward to your feedback.  
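To give a feel for the first step, pulling the pre-built FMW Infrastructure install image from the Oracle Container Registry looks roughly like the following. You must first sign in to container-registry.oracle.com in a browser and accept the license terms for the image; the repository path and tag shown here are assumptions to verify against the registry UI and the operator documentation.

docker login container-registry.oracle.com
docker pull container-registry.oracle.com/middleware/fmw-infrastructure:12.2.1.3

# optionally re-tag and push to your own registry if your Kubernetes nodes
# cannot reach the Oracle Container Registry directly
docker tag container-registry.oracle.com/middleware/fmw-infrastructure:12.2.1.3 phx.ocir.io/mytenancy/fmw-infrastructure:12.2.1.3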


The WebLogic Server

Oracle Driver Types in the WebLogic Console

In the early days, there was only a single URL format supported for the Oracle thin driver and it could be used with either the XA or non-XA driver class for a GENERIC datasource.  Over the years, more URL formats and driver classes have been supported so now there are nine options listed in the dropdown box for the Oracle driver when creating a datasource in the console (note that the DataDirect Driver is no longer shipped with the product).   The following is a description of the nine supported options based on the console input, a sample of the URL that is generated from the input, and the driver that is configured. 1. Database Driver Type in Console: Oracle’s Driver (Thin XA) for Application Continuity; Versions: Any URL Input: Database is used as service name (the “Database” header is really a misnomer), host, port Sample Generated URL: jdbc:oracle:thin:@//host:1521/service Driver:  oracle.jdbc.replay.OracleXADataSourceImpl   (XA replay driver) 2. Database Driver Type in Console: Oracle’s Driver (Thin XA) for Instance connections; Versions: Any URL Input: Database is used as SID, host, port.  The use of SID is deprecated.  You should stop using this format and instead use the service name. Sample Generated URL: jdbc:oracle:thin:@host:1521:SID Driver: oracle.jdbc.xa.client.OracleXADataSource 3. Database Driver Type in Console: Oracle’s Driver (Thin XA) for Service-Instance connections; Versions: Any URL Input: Database is used as instance name (the header “Database” is really a misnomer), service name, host, port.  This format is used when the service is available on multiple instances and the URL should map to a single instance for GENERIC and Multi Datasource.  A long format URL is generated so that the instance name can be specified. Sample Generated URL: jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=host)(PORT=1521)))     (CONNECT_DATA=(SERVICE_NAME=service)(INSTANCE_NAME=instance))) Driver:   oracle.jdbc.xa.client.OracleXADataSource 4. Database Driver Type in Console: Oracle’s Driver (Thin XA) for Service connections; Versions: Any URL Input: Database is used as service, host, port.  Note that a slash precedes the service name (a colon precedes the SID). Sample Generated URL: jdbc:oracle:thin:@host:1521/service Driver:  oracle.jdbc.xa.client.OracleXADataSource 5. Database Driver Type in Console: Oracle’s Driver (Thin) for Application Continuity; Versions: Any URL Input: Database is used as service, host, port Sample Generated URL:  jdbc:oracle:thin:@//host:1521/service Driver:  oracle.jdbc.replay.OracleDataSourceImpl 6. Database Driver Type in Console: Oracle’s Driver (Thin) for Instance connections; Versions: Any URL Input: Database is used as SID, host, port.  The use of SID is deprecated.  You should stop using this format and instead use the service name. Sample Generated URL:  jdbc:oracle:thin:@host:1521:SID       Driver:  oracle.jdbc.OracleDriver 7. Database Driver Type in Console: Oracle’s Driver (Thin) for Service connections; Versions: Any URL Input: Database is used as service, host, port.  This is the default and most popular format for GENERIC datasources.  The service should be available on a single instance for GENERIC and Multi Datasource. Sample Generated URL:  jdbc:oracle:thin:@host:1521/service  Driver:  oracle.jdbc.OracleDriver 8. 
Database Driver Type in Console: Oracle’s Driver (Thin) for Service-Instance connections; Versions: Any URL Input: Database is used as instance, service, host, port Sample Generated URL:  jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=host)(PORT=1521)))     (CONNECT_DATA=(SERVICE_NAME=service)(INSTANCE_NAME=instance))) Driver:  oracle.jdbc.OracleDriver 9. Database Driver Type in Console: Oracle’s Driver (Thin) for pooled instance connections; Versions: Any URL Input: Database is used as SID, host, port.  This format is used to get a pooled datasource and is not commonly used. Sample Generated URL:  jdbc:oracle:thin:@host:1521:SID       Driver:  oracle.jdbc.pool.OracleDataSource The creation form to input the URL information looks like the following. There are lots of variations on the URL.  Further, there are some newer formats that are not automatically generated by the console.  For those formats, just pick any of the supported formats and enter throw-away values.  The URL can be manually updated on a subsequent form before testing the datasource. Two of the more popular formats are the long format and the alias format. The long format can be used to specify options like retry delay and count, as in the following sample. jdbc:oracle:thin:@ (DESCRIPTION=(CONNECT_TIMEOUT=120)(RETRY_COUNT=20)(RETRY_DELAY=3)(TRANSPORT_CONNECT_TIMEOUT=3)    (ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=host)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=service))) The alias format can be used to reference an alias in the tnsnames.ora file, as in the following sample.  The oracle.net.tns_admin driver property must be used to specify the directory location for this file. jdbc:oracle:thin:@alias The long format is generally recommended because it uses the service name instead of the SID and provides access to new features in later versions of the driver.  The long format can optionally be stored in the tnsnames.ora file by using the alias format in the datasource definition.      
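To make the alias format concrete, here is a sketch of the two pieces involved: a tnsnames.ora entry (the long format stored under an alias) and the driver property that tells the driver where to find the file. The alias name, host, service, and directory are placeholders.

# tnsnames.ora entry referenced by the URL jdbc:oracle:thin:@myalias
myalias =
  (DESCRIPTION=(CONNECT_TIMEOUT=120)(RETRY_COUNT=20)(RETRY_DELAY=3)
    (ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=host)(PORT=1521)))
    (CONNECT_DATA=(SERVICE_NAME=service)))

# datasource connection property (Properties section of the JDBC descriptor)
oracle.net.tns_admin=/path/to/directory/containing/tnsnames.ora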


Announcement

WebLogic Server is now certified to run on OpenShift!

We are pleased to announce the certification of WebLogic Server on Red Hat OpenShift, which is based on Kubernetes. The WebLogic domain runs on OpenShift managed by the WebLogic Kubernetes Operator. The Operator uses a common set of Kubernetes APIs to provide an improved user experience when automating operations such as provisioning, lifecycle management, application versioning, product patching, scaling, and security.

We verified the following functionality:
- Installation of the WebLogic Kubernetes Operator.
- Displaying Operator logs in Kibana.
- Running a WebLogic domain where the domain configuration is in a Docker image or on a Persistent Volume.
- Accessing the WebLogic Administration Console and WLST.
- Deploying an application to a WebLogic cluster.
- Routing and exposing the application outside of OpenShift.
- Scaling of a WebLogic cluster.
- Load balancing requests to the application.
- Exposing WebLogic metrics to Prometheus using the WebLogic Monitoring Exporter.
- Controlling lifecycle management of the WebLogic domain/cluster/server.
- Initiating a rolling restart of the WebLogic domain.
- Changing domain configuration using Configuration Overrides.

The following matrix shows the versions of the products used in our certification:

Product – Version
WebLogic Server – 12.2.1.3+
WebLogic Kubernetes Operator – 2.0.1+
OpenShift – 3.11.43+
Kubernetes – 1.11.0+
Docker – 1.13.1ce+

On January 19th we published a blog, “WebLogic on OpenShift”, which describes the steps to run a WebLogic domain/cluster managed by the operator running on OpenShift. The starting point is the OpenShift Container Platform server set up on OCI in this earlier post. To run WebLogic on OpenShift, get the operator 2.0.1 Docker image from the Docker Hub, clone the GitHub project, and follow the sample in the blog. Our goal is to support WebLogic Server on all Kubernetes platforms on premises and in both private and public clouds, providing the maximum support to migrate WebLogic workloads to cloud neutral infrastructures. We continue to expand our certifications on different Kubernetes platforms; stay tuned – thanks!

Safe Harbor Statement
The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.


The WebLogic Server

Oracle 18.3 Database Support with WebLogic Server

The Oracle 18.3 database is available and works with WebLogic Server (WLS).

Using Older Drivers with the 18.3 Database Server
The simplest integration of WebLogic Server with an Oracle 18.3 database is to use the Oracle driver jar files included in your WebLogic Server installation. There are no known problems or upgrade issues when using the 11.2.x or 12.x drivers shipped with WLS when interoperating with an Oracle 18.3 database. See the Oracle JDBC FAQ for more information on driver support and features of the Oracle 18.3 database.

Using the Oracle 18.3 Drivers with the 18.3 Database Server
To use many of the new 18.3 Oracle database features, it is necessary to use the 18.3 Oracle database driver jar files. WLS 12.2.1.4.0 ships with the 19.3 Oracle driver, so a driver upgrade is not required; that is the recommended approach to get post-12.2 Oracle driver jar files. Note that the Oracle 18.3 database driver jar files are compiled for JDK 8. The earliest release of WLS that supports JDK 8 is WLS 12.1.3, so the Oracle 18.3 database driver jar files cannot work with earlier versions of WLS. In earlier versions of WLS, you can use the drivers that come with the WLS installation to connect to the 18.3 DB, as explained above. At this time, this article does not apply to Fusion Middleware (FMW) deployments of WLS.

Required Oracle 18.3 Driver Files
This section lists the files required to use an Oracle 18.3 driver with pre-12.2.1.4.0 releases of WebLogic Server. Note: These jar files must be added at the head of the CLASSPATH used for running WebLogic Server; they must come before all of the 11.2.x or 12.x Oracle database client jar files.

Select one of the following ojdbc files (note that these have "8" in the name instead of the "7" from the earlier release). The _g jar files are used for debugging and are required if you want to enable driver-level logging. If you are using FMW, you must use the "dms" version of the jar file; WLS uses the non-"dms" version of the jar by default.
- ojdbc8-full/ojdbc8.jar
- ojdbc8-diag/ojdbc8_g.jar
- ojdbc8-diag/ojdbc8dms.jar
- ojdbc8-diag/ojdbc8dms_g.jar

The following table lists the additional required driver files:
ojdbc8-full/simplefan.jar – Fast Application Notification
ojdbc8-full/ucp.jar – Universal Connection Pool
ojdbc8-full/ons.jar – Oracle Network Server client
ojdbc8-full/orai18n.jar – Internationalization support
ojdbc8-full/oraclepki.jar – Oracle Wallet support
ojdbc8-full/osdt_cert.jar – Oracle Wallet support
ojdbc8-full/osdt_core.jar – Oracle Wallet support
ojdbc8-full/xdb6.jar – SQLXML support

Download Oracle 18.3 Database Files
You can download the required jar files from https://www.oracle.com/technetwork/database/application-development/jdbc/downloads/jdbc-ucp-183-5013470.html. The ojdbc8-full jar files are contained in ojdbc8-full.tar.gz and the ojdbc8-diag files are contained in ojdbc8-diag.tar.gz. It is recommended to unpackage both of these files under a single directory, maintaining the directory structure (e.g., if the directory is /share, you would end up with /share/ojdbc8-full and /share/ojdbc8-diag directories).

Note: In earlier documents, the instructions included installation of aqjms.jar to run with AQJMS, and xmlparserv2_sans_jaxp_services.jar, orai18n-collation.jar, and orai18n-mapping.jar for XML processing. These jar files are not available in the Oracle Database 18c (18.3) JDBC Driver & UCP Downloads.
If you need one of these jar files, you will need to install the Oracle Database client, the Administrator package client installation, or a full database installation to get the jar files and add them to the CLASSPATH.

Update the WebLogic Server CLASSPATH or PRE_CLASSPATH
To use an Oracle 18.3 JDBC driver, you must update the CLASSPATH in your WebLogic Server environment. Prepend the required files specified in Required Oracle 18.3 Driver Files listed above to the CLASSPATH (before the 12.x driver jar files). If you are using startWebLogic.sh, you also need to set the PRE_CLASSPATH. The following code sample outlines a simple shell script that updates the CLASSPATH of your WebLogic environment. Make sure ORACLE183 is set appropriately to the directory where the files were unpackaged (e.g., /share in the example above).

#!/bin/sh
# Source this file to add the new 18.3 jar files at the beginning of the CLASSPATH
case "`uname`" in
  *CYGWIN*)   SEP=";" ;;
  Windows_NT) SEP=";" ;;
  *)          SEP=":" ;;
esac
dir=${ORACLE183:?}
# We need one of the following:
#   ojdbc8-full/ojdbc8.jar
#   ojdbc8-diag/ojdbc8_g.jar
#   ojdbc8-diag/ojdbc8dms.jar
#   ojdbc8-diag/ojdbc8dms_g.jar
if [ "$1" = "" ]
then
  ojdbc=ojdbc8.jar
else
  ojdbc="$1"
fi
case "$ojdbc" in
  ojdbc8.jar) ojdbc=ojdbc8-full/$ojdbc ;;
  ojdbc8_g.jar|ojdbc8dms.jar|ojdbc8dms_g.jar) ojdbc=ojdbc8-diag/$ojdbc ;;
  *) echo "Invalid argument - must be ojdbc8.jar|ojdbc8_g.jar|ojdbc8dms.jar|ojdbc8dms_g.jar"
     exit 1 ;;
esac
CLASSPATH="${dir}/${ojdbc}${SEP}$CLASSPATH"
CLASSPATH="${dir}/ojdbc8-full/simplefan.jar${SEP}$CLASSPATH"
CLASSPATH="${dir}/ojdbc8-full/ucp.jar${SEP}$CLASSPATH"
CLASSPATH="${dir}/ojdbc8-full/ons.jar${SEP}$CLASSPATH"
CLASSPATH="${dir}/ojdbc8-full/orai18n.jar${SEP}$CLASSPATH"
CLASSPATH="${dir}/ojdbc8-full/oraclepki.jar${SEP}$CLASSPATH"
CLASSPATH="${dir}/ojdbc8-full/osdt_cert.jar${SEP}$CLASSPATH"
CLASSPATH="${dir}/ojdbc8-full/osdt_core.jar${SEP}$CLASSPATH"
CLASSPATH="${dir}/ojdbc8-full/xdb6.jar${SEP}$CLASSPATH"

For example, save this script in your environment with the name setdb183_jars.sh, then source the script with ojdbc8.jar:

. ./setdb183_jars.sh ojdbc8.jar
export PRE_CLASSPATH="$CLASSPATH"
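A quick, purely illustrative sanity check is to confirm that the 18.3 jars now appear at the front of the path and to ask the driver jar for its version; most recent ojdbc jars print a version banner when executed directly, but if yours does not, check the Implementation-Version entry in its manifest instead.

. ./setdb183_jars.sh ojdbc8.jar
export PRE_CLASSPATH="$CLASSPATH"
# the 18.3 entries should be listed first
echo "$PRE_CLASSPATH" | tr "${SEP}" '\n' | head -9
# print the driver version of the jar that will now be picked up
java -jar "${ORACLE183:?}/ojdbc8-full/ojdbc8.jar"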


The WebLogic Server

Easily Create an OCI Container Engine for Kubernetes cluster with Terraform Installer to run WebLogic Server

In previous blogs, we have described how to run WebLogic Server on Kubernetes with the Operator using the Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE). To create new Kubernetes clusters quickly, we suggest that you use the Terraform-based Kubernetes installation for the Oracle Cloud Infrastructure (OCI). It consists of Terraform modules and an example base configuration that is used to provision and configure the resources needed to run a highly available and configurable Kubernetes cluster on the Oracle Cloud Infrastructure (OCI). In this blog, we provide sample Terraform scripts and describe the steps to create a basic OKE cluster. Note that this cluster can be used for testing and development purposes only. The provided samples of Terraform scripts should not be considered for creating production clusters, without more of a review in production clusters. Using Terraform installation scripts makes the provisioning of the cloud infrastructure and any required local resources for the Kubernetes cluster fast and easy to perform. They enable you to run WebLogic Server on OKE and leverage WebLogic Server applications in a managed Kubernetes environment in no time. The samples will create: A new Virtual Cloud Network (VCN) for the cluster Two load balancer subnets with security lists Three worker subnets with security lists A Kubernetes cluster with one Node Pool A kubeconfig file to allow access using kubectl Nodes and network settings will be configured to allow SSH access, and the cluster networking policies will allow NodePort services to be exposed. All OCI Container Engine masters are highly available (HA) and employ load balancers. Prerequisites To use these Terraform scripts, you will need to: Have an existing tenancy with sufficient compute and networking resources available for the desired cluster. Have an Identity and Access Management policy in place within that tenancy to allow the OKE service to manage tenancy resources. Have a user defined within that tenancy. Have an API key defined for use with the OCI API, as documented here. Have an SSH key pair for configuring SSH access to the nodes in the cluster. Configuration Files The following configuration files are part of this Terraform plan.  File Description   provider.tf Configures the OCI provider for a single region. vcn.tf Configures the VCN for the cluster, including two load balancer subnets and three worker subnets. cluster.tf Configures the cluster and worker node pool. kube_config.tf Downloads the kubeconfig for the new cluster. template.tfvars Example variable definition file for creating a cluster.   Creating a Cluster Without getting into the configuration details, getting a simple cluster running quickly entails the following: Create a new tfvars file based on the values from the provided oci.props file. Apply the configuration using Terraform. In the sample, we have provided a script that performs all the steps. In addition, the script downloads and installs all the required binaries for Terraform, Terraform OCI Provider, based on OS system (macOS or Linux). Create a Variables File This step involves creating a variables file that provides values for the tenancy that will contain the VCN and cluster you're creating. In the sample, the script oke.create.sh uses values from the property file oci.props. Copy the oci.props.template to the oci.props file and enter the values for all the properties in the oci.prop file:  1.user.ocid - Log in to the OCI console. 
Click the user icon in the upper-right corner and select User Settings. On the page that is displayed, copy the OCID information and enter it as the value of the user.ocid property.
2. tfvars.filename – Name of the tfvars file that the script will generate for the Terraform configuration (no file extension).
3. okeclustername – Name of the generated OCI Container Engine for Kubernetes cluster.
4. tenancy.ocid – In the OCI console, click the user icon in the upper-right corner and select Tenancy. Copy the tenancy OCID information and enter it as the value of the tenancy.ocid property.
5. region – Name of the home region used for the tenancy.
6. compartment.ocid – In the OCI console, in the upper-left corner, select ‘Menu’, then ‘Identity’, and then ‘Compartments’. On the ‘Compartments’ page, select the desired compartment, copy the OCID information, and enter it as the value of the compartment.ocid property.
7. compartment.name – Enter the name of the targeted compartment.
8. ociapi.pubkey.fingerprint – During your OCI access setup, you generated OCI user public and private API keys. Obtain the fingerprint from the API Keys section of the User Settings page in the OCI console, and add an escape backslash ‘\’ before each colon. In our example: c8\:c2\:da\:a2\:e8\:96\:7e\:bf\:ac\:ee\:ce\:bc\:a8\:7f\:07\:c5
9. ocipk.path – Full path to the OCI API private key.
10. vcn.cidr.prefix – Check your compartment and use a unique number for the prefix.
11. vcn.cidr – Full CIDR for the VCN; it must be unique within the compartment.
12. nodepool.shape – In the OCI console, select ‘Menu’, then ‘Governance’, and then ‘Service Limits’. On the ‘Service Limits’ page, go to ‘Compute’ and select an available node pool shape.
13. k8s.version – A Kubernetes version supported by OCI Container Engine for Kubernetes. To check the supported values, select ‘Menu’, then ‘Developer Services’, and then ‘Container Clusters’, select the Create Cluster button, and check the versions that are offered.
14. nodepool.imagename – Select a supported node pool image.
15. nodepool.ssh.pubkey – Copy and paste the content of your generated SSH public key. This is the key you would use to SSH into one of the nodes.
16. terraform.installdir – Location to install the Terraform binaries. The provided samples/terraform/oke.create.sh script will download all the needed artifacts.

The following table lists all the properties, descriptions, and examples of their values.
Variable Description Example user.ocid   OCID for the tenancy user – can be obtained from the user settings in the OCI console ocid1.user.oc1..aaaaaaaas5vt7s6jdho6mh2dqvyqcychofaiv5lhztkx7u5jlr5wwuhhm  tfvars.filename File name for generated tfvar file myokeclustertf region The name of region for tenancy us-phoenix-1 okeclustername The name for OCI Container Engine for Kubernetes cluster myokecluster tenancy.ocid OCID for the target tenancy ocid1.tenancy.oc1..aaaaaaaahmcbb5mp2h6toh4vj7ax526xtmihrneoumyat557rvlolsx63i compartment.ocid OCID for the target compartment ocid1.compartment.oc1..aaaaaaaaxzwkinzejhkncuvfy67pmb6wb46ifrixtuikkrgnnrp4wswsu compartment.name  Name for the target compartment QualityAssurance ociapi.pubkey.fingerprint Fingerprint of the OCI user's public key   c8\:c2\:da\:a2\:e8\:96\:7e\:bf\:ac\:ee\:ce\:bc\:a8\:7f\:07\:c5 ocipk.path API Private Key -- local path to the private key for the API key pair /scratch/mkogan/.oci/oci_api_key.pem vcn.cidr.prefix Prefix for VCN CIDR, used when creating subnets -- you should examine the target compartment find a CIDR that is available 10.1 vcn.cidr Full CIDR for the VCN, must be unique within the compartment (first 2 octets should match the vcn_cidr_prefix ) 10.1.0.0/16 nodepool.shape A valid OCI VM Shape for the cluster nodes VM.Standard2.1 k8s.version OCI Container Engine for Kubernetes supported Kubernetes version string v1.11.5   nodepool.imagename OCI Container Engine for Kubernetes supported Node Pool Image Oracle-Linux-7.4 nodepool.ssh.pubkey SSH public key (key contents as a string) to use to SSH into one of the nodes. ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9FSfGdjjL+EZre2p5yLTAgtLsnp49AUVX1yY9V8guaXHol6UkvJWnyFHhL7s0qvWj2M2BYo6WAROVc0/054UFtmbd9zb2oZtGVk82VbT6aS74cMlqlY91H/rt9/t51Om9Sp5AvbJEzN0mkI4ndeG/5p12AUyg9m5XOdkgI2n4J8KFnDAI33YSGjxXb7UrkWSGl6XZBGUdeaExo3t2Ow8Kpl9T0Tq19qI+IncOecsCFj1tbM5voD8IWE2l0SW7V6oIqFJDMecq4IZusXdO+bPc+TKak7g82RUZd8PARpvYB5/7EOfVadxsXGRirGAKPjlXDuhwJYVRj1+IjZ+5Suxz mkog@slc13kef terraform.installdir Location to install Terraform binaries /scratch/mkogan/myterraform Save the oci.props file in the samples/scripts/terraform directory. See the provided template as an example. Execute the oke.create.sh script in the [weblogic-kubernetes-operatorDir]/kubernetes/samples/scripts/terraform: sh oke.create.sh This command will: Generate the Terraform tfvar configuration file. Download Terraform, Terraform OCI Provider binaries. Execute Terraform ‘init’, ‘apply’ commands to create OCI Container Engine for Kubernetes cluster. Generate ${okeclustername}_kubeconfig file, in our example myokecluster_kubeconfig. Wait about 5-10 mins for the OCI Container Engine for Kubernetes Cluster creation to complete. Execute this command to switch to the created OCI Container Engine for Kubernetes cluster configuration: export KUBECONFIG=[fullpath]/myokecluster_kubeconfig  Check the nodes IPs and status by executing this command: kubectl get nodes bash-4.2$ kubectl get nodes NAME             STATUS    ROLES    AGE       VERSION 129.146.56.254   Ready     node     25d       v1.10.11 129.146.64.74    Ready     node     25d       v1.10.11 129.146.8.145    Ready     node     25d       v1.10.11 You can also check the status of the cluster in the OCI console. In the console, select ‘Menu’, then Developer Services’, then ’Container Clusters (OKE). Your newly created OCI Container Engine for Kubernetes cluster (OKE) is ready to use!   
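Putting it all together, a filled-in oci.props might look like the sketch below. The values are the examples from the table and are placeholders to replace with your own; follow the provided oci.props.template for the exact property syntax expected by oke.create.sh.

user.ocid=ocid1.user.oc1..aaaaaaaas5vt7s6jdho6mh2dqvyqcychofaiv5lhztkx7u5jlr5wwuhhm
tfvars.filename=myokeclustertf
okeclustername=myokecluster
tenancy.ocid=ocid1.tenancy.oc1..aaaaaaaahmcbb5mp2h6toh4vj7ax526xtmihrneoumyat557rvlolsx63i
region=us-phoenix-1
compartment.ocid=ocid1.compartment.oc1..aaaaaaaaxzwkinzejhkncuvfy67pmb6wb46ifrixtuikkrgnnrp4wswsu
compartment.name=QualityAssurance
ociapi.pubkey.fingerprint=c8\:c2\:da\:a2\:e8\:96\:7e\:bf\:ac\:ee\:ce\:bc\:a8\:7f\:07\:c5
ocipk.path=/home/opc/.oci/oci_api_key.pem
vcn.cidr.prefix=10.1
vcn.cidr=10.1.0.0/16
nodepool.shape=VM.Standard2.1
k8s.version=v1.11.5
nodepool.imagename=Oracle-Linux-7.4
nodepool.ssh.pubkey=ssh-rsa AAAAB3NzaC1yc2E... user@workstation
terraform.installdir=/home/opc/myterraform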
Summary In this blog, we demonstrated all the required steps to set up an OCI Container Engine for Kubernetes cluster quickly by using the provided samples of Terraform scripts. Now you can create and run WebLogic Server on Kubernetes in an OCI Container Engine for Kubernetes. See our Quick Start Guide  to quickly get the operator up and running or refer to the User Guide for detailed information on how to run the operator, how to create one or more WebLogic domains in Kubernetes, how to scale up or down a WebLogic cluster manually or automatically using the WebLogic Diagnostics Framework (WLDF) or Prometheus, how the operator manages load balancing for web applications deployed in WebLogic clusters, and how to provide integration for managing operator logs through Elasticsearch, Logstash, and Kibana.    


The WebLogic Server

Updated WebLogic Kubernetes Support with Operator 2.0

We are excited to announce the release of version 2.0 of the WebLogic Server Kubernetes Operator. The operator uses a common set of Kubernetes APIs to provide an improved user experience when automating operations such as provisioning, lifecycle management, application versioning, product patching, scaling, and security. This version of the operator evolves WebLogic to run more natively in cloud neutral infrastructures. It adds support for WebLogic domain configurations that are included in the Docker images, making these images portable across environments and improving support for CI/CD deployments. The operator is developed as an open source project fully supported by Oracle. The project can be found in our GitHub repository, and the images are available to be pulled from Docker Hub.

In this version of the WebLogic Server Kubernetes Operator, we have added the following functionality and support for:
- Kubernetes versions 1.10.11+, 1.11.5+, and 1.12.3+.
- A Helm chart to install the operator.
- Creating a WebLogic domain in a Docker image. We have developed samples in our Docker GitHub project for creating these images with the WebLogic Deploy Tooling (WDT) or with the WebLogic Scripting Tool (WLST). Samples for deploying these images with the operator can be found in the GitHub project.
- Creating WebLogic domains in a Kubernetes persistent volume or persistent volume claim (PV/PVC). This persistent volume can reside in an NFS file system or other Kubernetes volume types. See our samples to create a PV or PVC and to deploy the WebLogic domain in the persistent volume.
- When the WebLogic domain, application binaries, and application configuration are inside of a Docker image, this configuration is immutable. We offer configuration overrides for certain aspects of the WebLogic domain configuration to maintain the portability of these images between different environments.
- The Apache HTTP Server, Traefik, and Voyager (HAProxy-backed) Ingress controllers running within the Kubernetes cluster for load balancing HTTP requests across WebLogic Server Managed Servers running in clustered configurations. Unlike previous versions of the operator, operator 2.0 no longer deploys load balancers; we provide Helm charts to deploy these load balancers (see the samples listed below):
  - Sample Traefik Helm chart for setting up a Traefik load balancer for WebLogic clusters.
  - Sample Voyager Helm chart for setting up a Voyager load balancer for WebLogic clusters.
  - Sample Ingress Helm chart for setting up a Kubernetes Ingress for each WebLogic cluster using a Traefik or Voyager load balancer.
  - Sample Apache HTTP Server Helm chart and Apache samples using the default or custom configurations for setting up a load balancer for WebLogic clusters using the Apache HTTP Server with WebLogic Server Plugins.
- User-initiated lifecycle operations for WebLogic domains, clusters, and servers, including rolling restart. See the details in Starting, stopping, and restarting servers.
- Managing WebLogic configured and dynamic clusters.
- Scaling WebLogic domains by starting and stopping Managed Servers on demand, or by integrating with a REST API to initiate scaling based on the WebLogic Diagnostic Framework (WLDF), Prometheus, Grafana, or other rules. If you want to learn more about scaling with Prometheus, read the blog Using Prometheus to Automatically Scale WebLogic Clusters on Kubernetes. Also, see the blog WebLogic Dynamic Clusters on Kubernetes, which walks you through a sample of scaling with WLDF.
- Exposing T3 channels outside the Kubernetes domain, if desired.
- Exposing HTTP paths on a WebLogic domain outside the Kubernetes domain with load balancing.
- Updating the load balancer when a Managed Server is added or removed from a cluster during scaling up or shrinking actions.
- Publishing operator and WebLogic Server logs into Elasticsearch and interacting with them in Kibana. See our documentation, Configuring Kibana and Elasticsearch.

Our future plans include formal certification of WebLogic Server on OpenShift. If you are interested in deploying a WebLogic domain and operator 2.0 in OpenShift, read the blog, Running WebLogic on OpenShift. We are building and open-sourcing a new tool, the WebLogic Logging Exporter, to export WebLogic Server logs directly to the Elastic Stack. Also, we are publishing blogs that describe how to take advantage of the new functionality in operator 2.0. Please stay tuned for more information. The fastest way to experience the operator is to follow the Quick Start guide. We hope this announcement is helpful to those of you seeking to deploy WebLogic Server on Kubernetes, and look forward to your feedback.
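To make the scaling integration mentioned above more concrete, here is a sketch of the kind of call a WLDF action, a Prometheus alert webhook, or any other client can send to the operator's scaling endpoint. It assumes the operator's external REST interface is enabled; the host, port, domain UID (domain1), cluster name (cluster-1), and service account token are all placeholders, so check the operator's REST documentation for the exact endpoint details in your version.

curl -v -k \
  -H "Authorization: Bearer $(cat /path/to/operator-service-account-token)" \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -H "X-Requested-By: MyClient" \
  -d '{"managedServerCount": 3}' \
  -X POST "https://operator-host:31001/operator/latest/domains/domain1/clusters/cluster-1/scale"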


The WebLogic Server

Running WebLogic on OpenShift

In this post I am going to walk through setting up and using WebLogic on OpenShift, using the Oracle WebLogic Kubernetes Operator. My starting point is the OpenShift Container Platform server that I set up on OCI in this earlier post. I am going to use the operator to manage my domains in OpenShift. The operator pattern is common in Kubernetes for managing complex software products that have special lifecycle requirements, different to the base assumptions made by Kubernetes. For example, when there is state in a pod that needs to be saved or migrated before terminating a pod. The WebLogic Kubernetes operator includes such built-in knowledge of WebLogic, so it greatly simplifies the management of WebLogic in a Kubernetes environment. Plus it is completely open source and supported by Oracle. Overview Here is an overview of the process I am going to walk through: Create a new project (namespace) where I will be deploying WebLogic, Prepare the project for the WebLogic Kubernetes Operator, Install the operator, View the operator logs in Kibana, Prepare Docker images to run my domain, Create the WebLogic domain, Verify access to the WebLogic administration console and WLST, Deploy a test application into the cluster, Set up a route to expose the application publicly, Test scaling and load balancing, and Install the WebLogic Exporter to get metrics into Prometheus. Before we get started, you should clone the WebLogic operator project from GitHub. It contains many of the samples and helpers we will need. git clone https://github.com/oracle/weblogic-kubernetes-operator Create a new project (namespace) In the OpenShift web user interface, create a new project. If you already have other projects, go to the Application Console, and then click on the project pulldown at the top and click on “View All Projects” and then the “Create Project” button. If you don’t have existing projects, OpenShift will take you right to the create project page when you log in. I called my project “weblogic” as you can see in the image below: Creating a new project Then navigate into your project view. Right now it will be empty, as shown below: The new “weblogic” project Prepare the project for the WebLogic Kubernetes Operator The easiest way to get the operator Docker image is to just pull it from the Docker Hub. You can review details of the image in the Docker Hub. The WebLogic Kubernetes Operator in the Docker Hub You can use the following command to pull the image. You may need to docker login first if you have not previously done so: docker pull oracle/weblogic-kubernetes-operator:2.0 Instead of pulling the image and manually copying it onto our OpenShift nodes, we could also just add an Image Pull Secret to our project (namespace) so that OpenShift will be able to pull the image for us. We can do this with the following commands (at this stage we are using a user with the cluster-admin role): oc project weblogic oc create secret docker-registry docker-store-secret \ --docker-server=store.docker.com \ --docker-username=DOCKER_USER \ --docker-password=DOCKER_PASSWORD \ --docker-email=DOCKER_EMAIL In this command, replace DOCKER_USER with your Docker store userid, DOCKER_PASSWORD with your password, and DOCKER_EMAIL with the email address associated with your Docker Hub account. We also need to tell OpenShift to link this secret to our service account. 
Assuming we want to use the default service account in our weblogic project (namespace), we can run this command: oc secrets link default docker-store-secret --for=pull (Optional) Build the image yourself It is also possible to build the image yourself, rather than pulling it from Docker Hub. If you want to do that, first go to Docker Hub and accept the license for the Server JRE image, ensure you have the listed prerequisites installed, and then run these commands: mvn clean install docker build -t weblogic-kubernetes-operator:2.0 --build-arg VERSION=2.0 . Install the Elastic stack The operator can optionally send its logs to Elasticsearch and Kibana. This provides a nice way to view the logs, and to search and filter and so on, so let’s install this too. A sample YAML file is provided in the project to install them: kubernetes/samples/scripts/elasticsearch-and-kibana/elasticsearch_and_kibana.yaml Edit this file to set the namespace to “weblogic” for each deployment and service (i.e. on lines 30, 55, 74 and 98 - just search for namespace) and then install them using this command: oc apply -f kubernetes/samples/scripts/elasticsearch-and-kibana/elasticsearch_and_kibana.yaml After a few moments, you should see the pods running in our namespace: oc get pods,services NAME READY STATUS RESTARTS AGE pod/elasticsearch-75b6f589cb-c9hbw 1/1 Running 0 10s pod/kibana-746cc75444-nt8pr 1/1 Running 0 10s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/elasticsearch ClusterIP 172.30.143.158 <none> 9200/TCP,9300/TCP 10s service/kibana NodePort 172.30.18.210 <none> 5601:32394/TCP 10s So based on the service shown above and our project (namespace) named weblogic, the URL for Elasticsearch will be elasticsearch.weblogic.svc.cluster.local:9200. We will need this URL later. Install the operator Now we are ready to install the operator. In the 2.0 release, we use Helm to install the operator. So first we need to download Helm and set up Tiller on our OpenShift cluster (if you have not already installed it). Helm provide installation instructions on their site. I just downloaded the latest release, unzipped it, and made helm executable. Before we install Tiller, let’s create a cluster role binding to make sure the default service account in the kube-system namespace (which tiller will run under) has the cluster-admin role, which it will need to install and manage the operation. cat << EOF | oc apply -f - apiVersion: authorization.openshift.io/v1 kind: ClusterRoleBinding metadata: name: tiller-cluster-admin roleRef: name: cluster-admin subjects: - kind: ServiceAccount name: default namespace: kube-system userNames: - system:serviceaccount:kube-system:default EOF Now we can execute helm init to install tiller on the OpenShift cluster. Check it was successful with this command: oc get deploy -n kube-system NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE tiller-deploy 1 1 1 1 18s When you install the operator you can either pass the configuration parameters into Helm on the command line, or if you prefer, you can store them in a YAML file and pass that file in. I like to store them in a file. There is a sample provided, so we can just make a copy and update it with our details. cp kubernetes/charts/weblogic-operator/values.yaml my-operator-values.yaml Here are the updates we need to make: Set the domainNamespaces parameter to include just weblogic, i.e. the project (namespace) that we created to install WebLogic in. 
domainNamespaces: - “weblogic” Set the image parameter to match the name of the image you pulled from Docker Hub or built yourself. If you just create the image pull secret then use the value I have shown here: # image specifies the docker image containing the operator code. image: “oracle/weblogic-kubernetes-operator:2.0” Set the imagePullSecrets list to include the secret we created earlier. If you did not create the secret you can leave this commented out. imagePullSecrets: - name: “docker-store-secret” Set the elkIntegrationEnabled parameter to true. # elkIntegrationEnabled specifies whether or not Elastic integration is enabled. elkIntegrationEnabled: true Set the elasticSearchHost to the address of the Elasticsearch server that we set up earlier. # elasticSearchHost specifies the hostname of where elasticsearch is running. # This parameter is ignored if 'elkIntegrationEnabled' is false. elasticSearchHost: “elasticsearch.weblogic.svc.cluster.local” Now we can use helm to install the operator with this command. Notice that I pass in the name of my parameters YAML file in the --values option: helm install kubernetes/charts/weblogic-operator \ --name weblogic-operator \ --namespace weblogic \ --values my-operator-values.yaml \ --wait This command will wait until the operator starts up successfully. If it has to pull the image, that will obviously take a little while, but if this command does not finish in a minute or so, then it is probably stuck. You can send it to the background and start looking around to see what went wrong. Most often it will be a problem pulling the image. If you see the pod has status ImagePullBackOff then OpenShift was not able to pull the image. You can verify the pod was created with this command: oc get pods NAME READY STATUS RESTARTS AGE elasticsearch-75b6f589cb-c9hbw 1/1 Running 0 2h kibana-746cc75444-nt8pr 1/1 Running 0 2h weblogic-operator-54d99679f-dkg65 1/1 Running 0 48s View the operator logs in Kibana Now we have the operator running, let's take a look at the logs in Kibana. We installed Kibana earlier. Let's expose Kibana outside our cluster: oc expose service kibana route.route.openshift.io/kibana exposed You can check this worked with these commands: oc get routes NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD kibana kibana-weblogic.sub11201828382.certificationvc.oraclevcn.com kibana 5601 None oc get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE elasticsearch ClusterIP 172.30.143.158 <none> 9200/TCP,9300/TCP 2h internal-weblogic-operator-svc ClusterIP 172.30.252.148 <none> 8082/TCP 7m kibana NodePort 172.30.18.210 <none> 5601:32394/TCP 2h Now you should be able to access Kibana using the OpenShift front-end address and the node port for the Kibana service. In my case the node port is 32394 and my OpenShift server is accessible to me as openshift so I would use the address https://openshift:32394. You will see a page like this one: The initial Kibana page Click on the “Create” button, then click on the "Discover" option in the menu on the left hand side. Now hover over the entries for level and log in the field list, and click on the "Add" button that appears next to each one. Now you should have a nice log screen like this: The operator logs in Kibana Great! We have the operator installed. Now we are ready to move on to create some WebLogic domains. Prepare Docker images to run the domain Now we have some choices to make. 
There are two main ways to run WebLogic in Docker - we can use a standard Docker image which contains the WebLogic binaries but keep the domain configuration, applications, etc., outside the image, for example in a persistent volume; or we can create Docker images with both the WebLogic binaries and the domain burnt into them. There are advantages and disadvantages to both approaches, so it really depends on how we want to treat our domain. The first approach is good if you just want to run WebLogic in Kubernetes but you still want to use the admin console and WLST and so on to manage it. The second approach is better if you want to drive everything from a CI/CD pipeline where you do not mutate the running environment, but instead you update the source and then build new images and roll the environment to uptake them. A number of these kinds of considerations are listed here. For the sake of this post, let's use the "domain in image" option (the second approach). So we will need a base WebLogic image with the necessary patches installed, and then we will create our domain on top of that. Let's create a domain with a web application deployed in it, so that we have something to use to test our load balancing configuration and scaling later on. The easiest way to get the base image is to grab it from Oracle using this command: docker pull store/oracle/weblogic:12.2.1.3 The standard WebLogic Server 12.2.1.3.0 image from Docker Hub has the necessary patches already installed. It is worth knowing how to install patches, in case you need some additional one-off patches. If you are not interested in that, skip forward to here. (Optional) Manually creating a patched WebLogic image Here is an example Dockerfile that we can use to install the necessary patches. You can modify this to add any additional one-off patches that you need. Follow the pattern already there: copy the patch into the container, apply it, and then remove the temporary files when you are done. # --------------------------------------------- # Install patches to run WebLogic on Kubernetes # --------------------------------------------- # Start with an unpatched WebLogic 12.2.1.3.0 Docker image FROM your/weblogic-image:12.2.1.3 MAINTAINER Mark Nelson <mark.x.nelson@oracle.com> # We need patch 29135930 to run WebLogic on Kubernetes # We will also install the latest PSU which is 28298734 # That prereqs a newer version of OPatch, which is provided by 28186730 ENV PATCH_PKG0="p28186730_139400_Generic.zip" ENV PATCH_PKG2="p28298734_122130_Generic.zip" ENV PATCH_PKG3="p29135930_12213181016_Generic.zip" # Copy the patches into the container COPY $PATCH_PKG0 /u01/ COPY $PATCH_PKG2 /u01/ COPY $PATCH_PKG3 /u01/ # Install the psmisc package which is a prereq for 28186730 USER root RUN yum -y install psmisc # Install the three patches we need - do it all in one command to # minimize the number of layers and the size of the resulting image. # Also run opatch cleanup and remove temporary files.
USER oracle RUN cd /u01 && \ $JAVA_HOME/bin/jar xf /u01/$PATCH_PKG0 && \ $JAVA_HOME/bin/java -jar /u01/6880880/opatch_generic.jar \ -silent oracle_home=/u01/oracle -ignoreSysPrereqs && \ echo "opatch updated" && \ sleep 5 && \ cd /u01 && \ $JAVA_HOME/bin/jar xf /u01/$PATCH_PKG2 && \ cd /u01/28298734 && \ $ORACLE_HOME/OPatch/opatch apply -silent && \ cd /u01 && \ $JAVA_HOME/bin/jar xf /u01/$PATCH_PKG3 && \ cd /u01/29135930 && \ $ORACLE_HOME/OPatch/opatch apply -silent && \ $ORACLE_HOME/OPatch/opatch util cleanup -silent && \ rm /u01/$PATCH_PKG0 && \ rm /u01/$PATCH_PKG2 && \ rm /u01/$PATCH_PKG3 && \ rm -rf /u01/6880880 && \ rm -rf /u01/28298734 && \ rm -rf /u01/29135930 WORKDIR ${ORACLE_HOME} CMD ["/u01/oracle/createAndStartEmptyDomain.sh"] This Dockerfile assumes the patch archives are available in the same directory. You would need to download the patches from My Oracle Support and then you can build the image with this command: docker build -t my-weblogic-base-image:12.2.1.3.0 . Creating the image with the domain in it I am going to use the WebLogic Deploy Tooling to define my domain. If you are not familiar with this tool, you might want to check it out! It lets you define your domain declaratively instead of writing custom WLST scripts. For just one domain, maybe not such a big deal, but if you need to create a lot of domains it is pretty useful. It also lets you parameterize them, and it can introspect existing domains to create a model and associated artifacts. You can also use it to "move" domains from place to place, say from an on-premises install to Kubernetes, and you can change the version of WebLogic on the way without needing to worry about differences in WLST from version to version - it takes care of all that for you. Of course, we don't need all those features for what we need to do here, but it is good to know they are there for when you might need them! I created a GitHub repository with my domain model here. 
You can just clone this repository and then run the commands below to download the WebLogic Deploy Tooling and then build the domain in a new Docker image that we will tag my-domain1-image:1.0: git clone https://github.com/markxnelson/simple-sample-domain cd simple-sample-domain curl -Lo weblogic-deploy.zip https://github.com/oracle/weblogic-deploy-tooling/releases/download/weblogic-deploy-tooling-0.14/weblogic-deploy.zip # make sure JAVA_HOME is set correctly, and `mvn` is on your PATH ./build-archive.sh ./quickBuild.sh I won't go into all the nitty gritty details of how this works, that's a subject for another post (if you are interested, take a look at the documentation in the GitHub project), but take a look at the simple-toplogy.yaml file to get a feel for what is happening: domainInfo: AdminUserName: '@@FILE:/u01/oracle/properties/adminuser.properties@@' AdminPassword: '@@FILE:/u01/oracle/properties/adminpass.properties@@' topology: Name: '@@PROP:DOMAIN_NAME@@' AdminServerName: '@@PROP:ADMIN_NAME@@' ProductionModeEnabled: '@@PROP:PRODUCTION_MODE_ENABLED@@' Log: FileName: '@@PROP:DOMAIN_NAME@@.log' Cluster: '@@PROP:CLUSTER_NAME@@': DynamicServers: ServerTemplate: '@@PROP:CLUSTER_NAME@@-template' CalculatedListenPorts: false ServerNamePrefix: '@@PROP:MANAGED_SERVER_NAME_BASE@@' DynamicClusterSize: '@@PROP:CONFIGURED_MANAGED_SERVER_COUNT@@' MaxDynamicClusterSize: '@@PROP:CONFIGURED_MANAGED_SERVER_COUNT@@' Server: '@@PROP:ADMIN_NAME@@': ListenPort: '@@PROP:ADMIN_PORT@@' NetworkAccessPoint: T3Channel: ListenPort: '@@PROP:T3_CHANNEL_PORT@@' PublicAddress: '@@PROP:T3_PUBLIC_ADDRESS@@' PublicPort: '@@PROP:T3_CHANNEL_PORT@@' ServerTemplate: '@@PROP:CLUSTER_NAME@@-template': ListenPort: '@@PROP:MANAGED_SERVER_PORT@@' Cluster: '@@PROP:CLUSTER_NAME@@' appDeployments: Application: # Quote needed because of hyphen in string 'test-webapp': SourcePath: 'wlsdeploy/applications/test-webapp.war' Target: '@@PROP:CLUSTER_NAME@@' ModuleType: war StagingMode: nostage PlanStagingMode: nostage As you can see it is all parameterized. Most of those properties are defined in properties/docker-build/domain.properties: # These variables are used for substitution in the WDT model file. # Any port that will be exposed through Docker is put in this file. # The sample Dockerfile will get the ports from this file and not the WDT model. DOMAIN_NAME=domain1 ADMIN_PORT=7001 ADMIN_NAME=admin-server ADMIN_HOST=domain1-admin-server MANAGED_SERVER_PORT=8001 MANAGED_SERVER_NAME_BASE=managed-server- CONFIGURED_MANAGED_SERVER_COUNT=2 CLUSTER_NAME=cluster-1 DEBUG_PORT=8453 DEBUG_FLAG=false PRODUCTION_MODE_ENABLED=true JAVA_OPTIONS=-Dweblogic.StdoutDebugEnabled=false T3_CHANNEL_PORT=30012 T3_PUBLIC_ADDRESS=openshift CLUSTER_ADMIN=cluster-1,admin-server On lines 10-17 we are defining a cluster named cluster-1 with two dynamic servers in it. On 18-25 we are defining the admin server. And on 30-38 we are defining an application that we want deployed. This is a simple web application that prints out the IP address of the managed server it is running on. 
Here is the main page of that application: <%@ page import="java.net.UnknownHostException" %> <%@ page import="java.net.InetAddress" %> <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %> <%@page contentType="text/html" pageEncoding="UTF-8"%> <!DOCTYPE html> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <c:url value="/res/styles.css" var="stylesURL"/> <link rel="stylesheet" href="${stylesURL}" type="text/css"> <title>Test WebApp</title> </head> <body> <% String hostname, serverAddress; hostname = "error"; serverAddress = "error"; try { InetAddress inetAddress; inetAddress = InetAddress.getLocalHost(); hostname = inetAddress.getHostName(); serverAddress = inetAddress.toString(); } catch (UnknownHostException e) { e.printStackTrace(); } %> <li>InetAddress: <%=serverAddress %> <li>InetAddress.hostname: <%=hostname %> </body> </html> The source code for the web application is in the test-webapp directory. We will use this application later to verify that scaling and load balancing is working as we expect. So now we have a Docker image with our custom domain in it, and the WebLogic Server binaries and the patches we need. So we are ready to deploy it! Create the WebLogic domain First, we need to create a Kubernetes secret with the WebLogic credentials in it. This is used by the operator to start the domain. You can use the sample provided here to create the secret: ./create-weblogic-credentials.sh \ -u weblogic \ -p welcome1 \ -d domain1 \ -n weblogic Next, we need to create the domain custom resource. To do this, we prepare a Kubernetes YAML file as follows. I have removed the comments to make this more readable, you can find a sample here which has extensive comments to explain how to create these files: apiVersion: "weblogic.oracle/v2" kind: Domain metadata: name: domain1 namespace: weblogic labels: weblogic.resourceVersion: domain-v2 weblogic.domainUID: domain1 spec: domainHome: /u01/oracle/user_projects/domains/domain1 domainHomeInImage: true image: "my-domain1-image:1.0" imagePullPolicy: "IfNotPresent" webLogicCredentialsSecret: name: domain1-weblogic-credentials includeServerOutInPodLog: true serverStartPolicy: "IF_NEEDED" serverPod: annotations: openshift.io/scc: anyuid env: - name: JAVA_OPTIONS value: "-Dweblogic.StdoutDebugEnabled=false" - name: USER_MEM_ARGS value: "-Xms64m -Xmx256m " adminServer: serverStartState: "RUNNING" adminService: channels: - channelName: default nodePort: 30701 - channelName: T3Channel clusters: - clusterName: cluster-1 serverStartState: "RUNNING" replicas: 2 Now we can use this file to create the domain custom resource, using the following command: oc apply -f domain.yaml You can verify it was created, and view the resource that was created with these commands: oc get domains oc describe domain domain1 The operator will notice this new domain custom resource and it will react accordingly. In this case, since we have asked for the admin server and the servers in the the cluster to come to the "RUNNING" state (in lines 27 and 25 above) the operator will start up the admin server first, and then both managed servers. You can watch this happen using this command: oc get pods -w This will print out the current pods, and then update every time there is a change in status. You can hit Ctrl-C to exit from the command when you have seen enough. The operator also creates services for the admin server, each managed server and the cluster. 
You can see the services with this command: oc get services You will notice a service called domain1-admin-server-external which is used to expose the admin server's default channel outside of the cluster, to allow us to access the admin console and to use WLST. We need to tell OpenShift to make this service available externally by creating a route with this command: oc expose service domain1-admin-server-external --port=default This will expose that service on the NodePort it declared. Verify access to the WebLogic administration console and WLST Now you can start a browser and point it to any one of your worker nodes and use the NodePort from the service (30701 in the example above) to access the admin console. For me, since I have an entry in my /etc/hosts for my OpenShift server, this address is http://openshift:30701/console. You can log in to the admin console and use it as normal. You might like to navigate into "Deployments" to verify that our web application is there: Viewing the test application in the WebLogic admin console You might also like to go to the "Servers" page to validate that you can see all of the managed servers: Viewing the managed servers in the WebLogic admin console We can also use WLST against the domain, if desired. To do this, just start up WLST as normal on your client machine and then use the OpenShift server address and the NodePort to form the t3 URL. Using the example above, my URL is t3://openshift:30701: ~/wls/oracle_common/common/bin/wlst.sh Initializing WebLogic Scripting Tool (WLST) ... Jython scans all the jar files it can find at first startup. Depending on the system, this process may take a few minutes to complete, and WLST may not return a prompt right away. Welcome to WebLogic Server Administration Scripting Shell Type help() for help on available commands wls:/offline> connect('weblogic','welcome1','t3://openshift:30701') Connecting to t3://openshift:30701 with userid weblogic ... Successfully connected to Admin Server "admin-server" that belongs to domain "domain1". Warning: An insecure protocol was used to connect to the server. To ensure on-the-wire security, the SSL port or Admin port should be used instead. wls:/domain1/serverConfig/> ls('Servers') dr-- admin-server dr-- managed-server-1 dr-- managed-server-2 wls:/domain1/serverConfig/> You can use WLST as normal, either interactively, or you can run scripts. Keep in mind, though, that since the domain is burnt into the image, any changes you make with WLST will be lost when the pods restart. If you want to make permanent changes, you would need to include the WLST scripts in the image building process and then re-run it to build a new version of the image. Of course, if you have chosen to put your domain in persistent storage instead of burning it into the image, this caveat would not apply. Set up a route to expose the application publicly Now, let's expose our web application outside the OpenShift cluster. To do this, we are going to want to set up a load balancer to distribute requests across all of the managed servers, and then expose the load balancer endpoint.
We can use the provided sample to install the Traefik load balancer using the following command: helm install stable/traefik \ --name traefik-operator \ --namespace traefik \ --values kubernetes/samples/charts/traefik/values.yaml \ --set "kubernetes.namespaces={traefik,weblogic}" \ --wait Make sure you include the weblogic namespace so the Traefik ingress controller knows to load balance ingresses in our namespace. Next, we need to create the ingress object. We can also do this with the provided sample using this command: helm install kubernetes/samples/charts/ingress-per-domain \ --name domain1-ingress \ --namespace weblogic \ --set wlsDomain.domainUID=domain1 \ --set traefik.hostname=domain1.org Note that in a real deployment you would set the hostname to your actual DNS hostname. In this example, I am just using a made-up hostname. Test scaling and load balancing Now we can hit the web application to verify the load balancing is working. You can hit it from a browser, but in that case session affinity will kick in, so you will likely see a response from the same managed server over and over again. If you use curl though, you should see it round robin. You can run curl in a loop using this command: while true do sleep 2 curl -v -H 'host: domain1.org' http://openshift:30305/testwebapp/ done The web application just prints out the name and IP address of the managed server, so you should see the output alternate between all of the managed servers in sequence. Now, let's scale the cluster down and see what happens. To initiate scaling, we can just edit the domain custom resource with this command (a scripted alternative is sketched after the conclusion below): oc edit domain domain1 This will open the domain custom resource in an editor. Find the entry for cluster-1 and underneath that the replicas entry: clusters: - clusterName: cluster-1 clusterService: annotations: {} labels: {} replicas: 4 You can change the replicas to another value, for example 2, and then save and exit. The operator will notice this change and will react by gracefully shutting down two of the managed servers. You can watch this happen with the command: oc get pods -w You will also notice in the other window where you have curl running that those two managed servers no longer get requests. You will also notice that there are no failed requests - the servers are removed from the domain1-cluster-cluster-1 service early, so they do not receive requests that would result in a connection refused or timeout error. The ingress and the load balancer adjust automatically. Once the scaling is finished, you might want to scale back up to 4 and watch the operation in reverse. Conclusion At this point we have our custom WebLogic domain, with our own configuration and applications deployed, running on OpenShift under the control of the operator. We have seen how we can access the admin console, how to use WLST, how to set up load balancing and expose applications outside the OpenShift cluster, and how to control scaling. Here are a few screenshots from the OpenShift console showing what we have done: The overview page Drilling down to the admin server pod The monitoring page
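A quick scripting note on the scaling step above: instead of opening an interactive editor with oc edit, you can patch the replicas value directly, which is convenient in CI/CD pipelines. This is only a sketch; it assumes the replicas field sits at /spec/clusters/0/replicas as in the domain resource shown earlier, so verify the path against your own domain YAML before using it.

# Scale cluster-1 to 2 Managed Servers without an interactive editor
oc patch domain domain1 --type=json \
  -p '[{"op": "replace", "path": "/spec/clusters/0/replicas", "value": 2}]'

The operator reacts to the patched resource exactly as it does to an interactive edit, so you can watch the pods drain with the same oc get pods -w command.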



Voyager/HAProxy as Load Balancer to WebLogic Domains in Kubernetes

Overview Load balancing is a widely-used technology to build scalable and resilient applications. The major function of load balancing is to monitor servers and distribute network traffic among multiple servers, such as web applications and databases. For containerized applications running on Kubernetes, load balancing is also a necessity. In the WebLogic Kubernetes Operator version 1.0 we added support for Voyager/HAProxy. We enhanced the script create-weblogic-domain.sh to provide out-of-the-box support for Voyager/HAProxy. The script supports load balancing to servers of a single WebLogic domain/cluster. This blog describes how to configure Voyager/HAProxy to expand load balancing support to applications deployed to multiple WebLogic domains in Kubernetes. Basics of Voyager/HAProxy If you are new to HAProxy and Voyager, it's worth spending some time learning the basics of both. HAProxy is free, open source software that provides a high availability load balancer and proxy server for TCP and HTTP-based applications. It's well known for being fast and efficient (in terms of processor speed and memory usage). See the HAProxy Starter Guide. Voyager is an HAProxy-backed Ingress controller (refer to the Kubernetes documentation about Ingress). Once installed in a Kubernetes cluster, the Voyager operator watches for Kubernetes Ingress resources and Voyager's own Ingress CRD and automatically creates, updates, and deletes HAProxy instances accordingly. See the Voyager overview to understand how the Voyager operator works. Running WebLogic Domains in Kubernetes Clone the wls-operator-quickstart project from GitHub to your local environment. This project helps you set up the WebLogic Operator and domains with minimal manual steps. Please complete the steps in the 'Pre-Requirements' section of the README to set up your local environment. With the help of the wls-operator-quickstart project, we want to set up two WebLogic domains running on Kubernetes using the WebLogic Kubernetes Operator, each in its own namespace: The domain named 'domain1' is running in the namespace 'default'; it has one cluster, 'cluster-1', which contains two Managed Servers, 'domain1-managed-server1' and 'domain1-managed-server2'. The domain named 'domain2' is running in the namespace 'test1'; it has one cluster, 'cluster-1', which contains two Managed Servers, 'domain2-managed-server1' and 'domain2-managed-server2'. A web application 'testwebapp.war' is deployed separately to the cluster in both domain1 and domain2. This web application has a default page which displays which Managed Server processed the HTTP request. Use the following steps to prepare the WebLogic domains, which are the back ends for HAProxy: # change directory to root folder of wls-operator-quickstart $ cd xxx/wls-operator-quickstart # Build and deploy weblogic operator $ ./operator.sh create # Create domain1. Change value of `loadBalancer` to `NONE` in domain1-inputs.yaml before running it. $ ./domain.sh create # Create domain2. Change value of `loadBalancer` to `NONE` in domain2-inputs.yaml before running it.
$ ./domain.sh create -d domain2 -n test1 # Install Voyager $ kubectl create namespace voyager $ curl -fsSL https://raw.githubusercontent.com/appscode/voyager/6.0.0/hack/deploy/voyager.sh \ | bash -s -- --provider=baremetal --namespace=voyager Check the status of the WebLogic domains, as follows: # Check status of domain1 $ kubectl get all NAME READY STATUS RESTARTS AGE pod/domain1-admin-server 1/1 Running 0 5h pod/domain1-managed-server1 1/1 Running 0 5h pod/domain1-managed-server2 1/1 Running 0 5h NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/domain1-admin-server NodePort 10.105.135.58 <none> 7001:30705/TCP 5h service/domain1-admin-server-extchannel-t3channel NodePort 10.111.9.15 <none> 30015:30015/TCP 5h service/domain1-cluster-cluster-1 ClusterIP 10.108.34.66 <none> 8001/TCP 5h service/domain1-managed-server1 ClusterIP 10.107.185.196 <none> 8001/TCP 5h service/domain1-managed-server2 ClusterIP 10.96.86.209 <none> 8001/TCP 5h service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5h # Verify web app in domain1 via running curl on admin server pod to access the cluster service $ kubectl -n default exec -it domain1-admin-server -- curl http://domain1-cluster-cluster-1:8001/testwebapp/ # Check status of domain2 $ kubectl -n test1 get all NAME READY STATUS RESTARTS AGE pod/domain2-admin-server 1/1 Running 0 5h pod/domain2-managed-server1 1/1 Running 0 5h pod/domain2-managed-server2 1/1 Running 0 5h NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/domain2-admin-server NodePort 10.97.77.35 <none> 7001:30701/TCP 5h service/domain2-admin-server-extchannel-t3channel NodePort 10.98.239.28 <none> 30012:30012/TCP 5h service/domain2-cluster-cluster-1 ClusterIP 10.102.228.204 <none> 8001/TCP 5h service/domain2-managed-server1 ClusterIP 10.96.59.190 <none> 8001/TCP 5h service/domain2-managed-server2 ClusterIP 10.101.102.102 <none> 8001/TCP 5h # Verify the web app in domain2 via running curl in admin server pod to access the cluster service $ kubectl -n test1 exec -it domain2-admin-server -- curl http://domain2-cluster-cluster-1:8001/testwebapp/ After both WebLogic domains are running on Kubernetes, I will demonstrate two approaches that use different HAProxy features to set up Voyager as a single entry point to the two WebLogic domains. Using Host Name-Based Routing Create the Ingress resource file 'voyager-host-routing.yaml' which contains an Ingress resource using host name-based routing. apiVersion: voyager.appscode.com/v1beta1 kind: Ingress metadata: name: hostname-routing namespace: default annotations: ingress.appscode.com/type: 'NodePort' ingress.appscode.com/stats: 'true' ingress.appscode.com/affinity: 'cookie' spec: rules: - host: domain1.org http: nodePort: '30305' paths: - backend: serviceName: domain1-cluster-cluster-1 servicePort: '8001' - host: domain2.org http: nodePort: '30305' paths: - backend: serviceName: domain2-cluster-cluster-1.test1 servicePort: '8001' Then deploy the YAML file using`kubectl create -f voyager-host-routing.yaml`. Testing Load Balancing with Host Name-Based Routing To make host name-based routing work, you need to set up virtual hosting which usually involves DNS changes. For demonstration purposes, we will use curl commands to simulate load balancing with host name-based routing. 
# Verify load balancing on domain1 $ curl --silent -H 'host: domain1.org' http://${HOSTNAME}:30305/testwebapp/ | grep InetAddress.hostname <li>InetAddress.hostname: domain1-managed-server1 $ curl --silent -H 'host: domain1.org' http://${HOSTNAME}:30305/testwebapp/ | grep InetAddress.hostname <li>InetAddress.hostname: domain1-managed-server2 # Verify load balancing on domain2 $ curl --silent -H 'host: domain2.org' http://${HOSTNAME}:30305/testwebapp/ | grep InetAddress.hostname <li>InetAddress.hostname: domain2-managed-server1 $ curl --silent -H 'host: domain2.org' http://${HOSTNAME}:30305/testwebapp/ | grep InetAddress.hostname <li>InetAddress.hostname: domain2-managed-server2 The result is: If host name 'domain1.org' is specified, the request will be processed by Managed Servers in domain1. If host name 'domain2.org' is specified, the request will be processed by Managed Servers in domain2. Using Path-Based Routing and URL Rewriting In this section we use path-based routing with URL rewriting to achieve the same behavior as host name-based routing. Create the Ingress resource file 'voyager-path-routing.yaml'. apiVersion: voyager.appscode.com/v1beta1 kind: Ingress metadata: name: path-routing namespace: default annotations: ingress.appscode.com/type: 'NodePort' ingress.appscode.com/stats: 'true' ingress.appscode.com/rewrite-target: "/testwebapp" spec: rules: - host: '*' http: nodePort: '30307' paths: - path: /domain1 backend: serviceName: domain1-cluster-cluster-1 servicePort: '8001' - path: /domain2 backend: serviceName: domain2-cluster-cluster-1.test1 servicePort: '8001' Then deploy the YAML file using `kubectl create -f voyager-path-routing.yaml`. Verify Load Balancing with Path-Based Routing To verify the load balancing result, we use the curl command. Another approach is to access the URL from a web browser directly.  # Verify load balancing on domain1 $ curl --silent http://${HOSTNAME}:30307/domain1/ | grep InetAddress.hostname <li>InetAddress.hostname: domain1-managed-server1 $ curl --silent http://${HOSTNAME}:30307/domain1/ | grep InetAddress.hostname <li>InetAddress.hostname: domain1-managed-server2 # Verify load balancing on domain2 $ curl --silent http://${HOSTNAME}:30307/domain2/ | grep InetAddress.hostname <li>InetAddress.hostname: domain2-managed-server1 $ curl --silent http://${HOSTNAME}:30307/domain2/ | grep InetAddress.hostname <li>InetAddress.hostname: domain2-managed-server2 You can see that we specify different URLs to dispatch traffic to different WebLogic domains with path-based routing. With the URL rewriting feature, we eventually access the web application with the same context path in each domain. Cleanup After you finish your exercise using the instructions in this blog, you may want to clean up all the resources created in Kubernetes.  # Cleanup voyager ingress resources $ kubectl delete -f voyager-host-routing.yaml $ kubectl delete -f voyager-path-routing.yaml   # Uninstall Voyager $ curl -fsSL https://raw.githubusercontent.com/appscode/voyager/6.0.0/hack/deploy/voyager.sh \ | bash -s -- --provider=baremetal --namespace=voyager --uninstall --purge # Delete wls domains and wls operator $ cd <QUICKSTART_ROOT> $ ./domain.sh delete --clean-all $ ./domain.sh delete -d domain2 -n test1 --clean-all $ ./operator.sh delete Summary In this blog, we describe how to set up a Voyager load balancer to provide high availability load balancing and a proxy server for TCP and HTTP-based requests to applications deployed in WebLogic Server domains. 
The samples provided in this blog describe how to use Voyager as a single point in front of multiple WebLogic domains. We provide examples to show you how to use Voyager features like host name-based routing, path-based routing, and URL rewriting. I hope you find this blog helpful and try using Voyager in your WebLogic on Kubernetes deployments.


Make WebLogic Domain Provisioning and Deployment Easy!

The Oracle WebLogic Deploy Tooling (WDT) makes the automation of WebLogic Server domain provisioning and application deployment easy. Instead of writing WLST scripts that need to be maintained, WDT creates a declarative, metadata model that describes the domain, applications, and the resources used by the applications.  This metadata model makes it easy to provision, deploy, and perform domain lifecycle operations in a repeatable fashion, which makes it perfect for the Continuous Delivery of applications. The WebLogic Deploy Tooling provides maximum flexibility by supporting a wide range of WebLogic Server versions from 10.3.6 to 12.2.1.3. WDT supports both Windows and UNIX operating systems, and provides the following benefits: Introspects a WebLogic domain into a metadata model (JSON or YAML). Creates a new WebLogic Server domain using a metadata model and allows version control of the domain configuration. Updates the configuration of an existing WebLogic Server domain, and deploys applications and resources into the domain. Allows runtime alterations to the metadata model (also referred to as the model) before applying it. Allows the same model to apply to multiple environments by accepting value placeholders provided in a separate property file. Passwords can be encrypted directly in the model or property file. Supports a sparse model so that the model only needs to describe what is required for the specific operation without describing other artifacts. Provides easy validation of the model content and verification that its related artifacts are well-formed. Allows automation and continuous delivery of deployments. Facilitates Lift and Shift of the domain into other environments, like Docker images and Kubernetes.   Currently, the project provides six single-purpose tools, all exposed as shell scripts: The Create Domain Tool (createDomain) understands how to create a domain and populate the domain with all the resources and applications specified in the model. The Update Domain Tool (updateDomain) understands how to update an existing domain and populate the domain with all the resources and applications specified in the model, either in offline or online mode. The Deploy Applications Tool (deployApps) understands how to add resources and applications to an existing domain, either in offline or online mode. The Discover Domain Tool (discoverDomain) introspects an existing domain and creates a model file describing the domain and an archive file of the binaries deployed to the domain. The Encrypt Model Tool (encryptModel) encrypts the passwords in a model (or its variable file) using a user-provided passphrase. The Validate Model Tool (validateModel) provides both standalone validation of a model as well as model usage information to help users write or edit their models. The WebLogic on Docker and Kubernetes projects take advantage of WDT to provision WebLogic domains and deploy applications inside of a Docker image or in a Kubernetes persistent volume (PV).  The Discover and Create Domain Tools enable us to take a domain running in a non-Docker/Kubernetes environment and lift and shift it into these environments. Docker/Kubernetes environments require a specific WebLogic configuration (for example, network). The Validate Model Tool provides mechanisms to validate the WebLogic configuration and ensure that it can run in these environments.
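To make that tool list concrete, here is a sketch of how the Discover Domain and Create Domain tools can be chained to capture an existing domain as a model and recreate it elsewhere. The Oracle home and domain paths are placeholders, and the flag names reflect the early WDT releases, so check the README of the version you download.

    # Introspect an existing domain into a model plus an archive of its binaries
    $ weblogic-deploy/bin/discoverDomain.sh \
        -oracle_home /u01/oracle \
        -domain_home /u01/domains/base_domain \
        -model_file ./discovered-model.yaml \
        -archive_file ./discovered-archive.zip

    # Validate the model, then recreate the domain from it in a new location
    $ weblogic-deploy/bin/validateModel.sh \
        -oracle_home /u01/oracle \
        -model_file ./discovered-model.yaml

    $ weblogic-deploy/bin/createDomain.sh \
        -oracle_home /u01/oracle \
        -domain_parent /u01/domains \
        -domain_type WLS \
        -model_file ./discovered-model.yaml \
        -archive_file ./discovered-archive.zip

The same model and archive pair is what the Docker and Kubernetes samples described below feed into their image builds.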
We have created a sample in the GitHub WebLogic Docker project, https://github.com/oracle/docker-images/tree/master/OracleWebLogic/samples/12213-domain-wdt, to demonstrate how to provision a WebLogic 12.2.1.3 domain inside of a Docker image.  The WebLogic domain is configured with a WebLogic dynamic cluster, a simple application deployed, and a data source that connects to an Oracle database running inside of a container. This sample includes a basic WDT model, simple-topology.yaml, that describes the intended configuration of the domain within the Docker image. WDT models can be created and modified using a text editor, following the format and rules described in the README file for the WDT project in GitHub.  Alternatively, the model can be created using the WDT Discover Domain Tool to introspect an already existing WebLogic domain. Domain creation may require the deployment of applications and libraries. This is accomplished by creating a ZIP archive with a specific structure, then referencing those items in the model. This sample creates and deploys a simple ZIP archive, containing a small application WAR. The archive is built in the sample directory prior to creating the Docker image. How to Build and Run The image is based on a WebLogic Server 12.2.1.3 image in the docker-images repository. Follow the README in https://github.com/oracle/docker-images/tree/master/OracleWebLogic/dockerfiles/12.2.1.3 to build the WebLogic Server install image to your local repository. The WebLogic Deploy Tool installer is used to build this sample WebLogic domain image. This sample deploys a simple, one-page web application contained in a ZIP archive, archive.zip. This archive needs to be built before building the domain Docker image.     $ ./build-archive.sh Before the domain image is built, we also need the WDT model simple-topology.yaml.  If you want to customize this WebLogic domains sample, you can either use an editor to change the model simple-topology.yaml or use the WDT Discover Domain Tool to introspect an already existing WebLogic domain. The image below shows you a snippet of the sample WDT model simple-topology.yaml where the database password will be encrypted and replaced by the value in the properties file we will supply before running the WebLogic domain containers. To build this sample, run:     $ docker build \     --build-arg WDT_MODEL=simple-topology.yaml \     --build-arg WDT_ARCHIVE=archive.zip \     --force-rm=true \     -t 12213-domain-wdt . You should have a WebLogic domain image in your local repository. How to Run In this sample, each of the Managed Servers in the WebLogic domain have a data source deployed to them. We want to connect the data source to an Oracle database running in a container. Pull the Oracle database image from the Docker Store or the Oracle Container Registry into your local repository.     $ docker pull container-registry.oracle.com/database/enterprise:12.2.0.1 Create the Docker network for the WLS and database containers to run:     $ docker network create -d bridge SampleNET Run the Database Container To create a database container, use the environment file below to set the database name, domain, and feature bundle. 
The example environment file, properties/env.txt, is:     DB_SID=InfraDB     DB_PDB=InfraPDB1     DB_DOMAIN=us.oracle.com     DB_BUNDLE=basic Run the database container by running the following Docker command:     $ docker run -d --name InfraDB --network=SampleNET  \     -p 1521:1521 -p 5500:5500  \     --env-file /Users/mriccell/Docker/docker-images/OracleWebLogic/samples/12213-domain-wdt/properties/env.txt  \     -it --shm-size="8g"  \     container-registry.oracle.com/database/enterprise:12.2.0.1     Verify that the database is running and healthy. The STATUS field shows (healthy) in the output of docker ps.  The database is created with the default password 'Oradoc_db1'. To change the database password, you must use sqlplus.  To run sqlplus pull the Oracle Instant Client from the Oracle Container Registry or the Docker Store, and run a sqlplus container with the following command:     $ docker run -ti --network=SampleNET --rm \     store/oracle/database-instantclient:12.2.0.1 \     sqlplus  sys/Oradoc_db1@InfraDB:1521/InfraDB.us.oracle.com \     AS SYSDBA       SQL> alter user system identified by dbpasswd container=all; Make sure you add the new database password 'dbpasswd ' in the properties file, properties/domain.properties DB_PASSWORD. Verify that you can connect to the database:     $ docker exec -ti InfraDB  \     /u01/app/oracle/product/12.2.0/dbhome_1/bin/sqlplus \     system/dbpasswd@InfraDB:1521/InfraPDB1.us.oracle.com       SQL> select * from Dual; Run the WebLogic Domain You will need to modify the domain.properties file in properties/domain.properties with all the parameters required to run the WebLogic domain, including the database password. To start the containerized Administration Server, run:     $ docker run -d --name wlsadmin --hostname wlsadmin \     --network=SampleNET -p 7001:7001 \     -v /Users/mriccell/Docker/docker-images/OracleWebLogic/samples/12213-domain-wdt/properties:/u01/oracle/properties  \     12213-domain-wdt To start a containerized Managed Server (ms-1) to self-register with the Administration Server above, run:     $ docker run -d --name ms-1 --link wlsadmin:wlsadmin \     --network=SampleNET -p 9001:9001 \     -v /Users/mriccell/Docker/docker-images/OracleWebLogic/samples/12213-domain-wdt/properties:/u01/oracle/properties  \     -e MS_NAME=ms-1 12213-domain-wdt startManagedServer.sh To start an additional Managed Server (in this example, ms-2), run:     $ docker run -d --name ms-2 --link wlsadmin:wlsadmin  \     --network=SampleNET -p 9002:9001 \     -v /Users/mriccell/Docker/docker-images/OracleWebLogic/samples/12213-domain-wdt/properties:/u01/oracle/properties  \     -e MS_NAME=ms-2 12213-domain-wdt startManagedServer.sh The above scenario will give you a WebLogic domain with a dynamic cluster set up on a single host environment. Let’s verify that the servers are running and that the data source connects to the Oracle database running in the container. Invoke the WLS Administration Console by entering this URL in your browser, ‘http://localhost:7001/console’. Log in using the credentials you provided in the domain.properties file. The WebLogic Deploy Tooling simplifies the provisioning of WebLogic domains, deployment of applications, and the resources these applications need.  The WebLogic on Docker/Kubernetes projects take advantage of these tools to simplify the provisioning of domains inside of an image or persisted to a Kubernetes persistent volume.  
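If you prefer to verify from the command line rather than the browser, a quick sanity check (just a sketch; the container names and ports match the docker run commands above) is to list the containers attached to the sample network and confirm that the Administration Server answers on its published port:

    $ docker ps --filter network=SampleNET \
        --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"

    $ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:7001/console

    $ docker logs --tail 20 wlsadmin

An HTTP 200 or a redirect status from the console URL indicates the Administration Server is up; the wlsadmin container logs are the place to look if it is not.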
We have released the General Availability version of the WebLogic Kubernetes Operator which simplifies the management of WebLogic domains in Kubernetes. Soon we will release the WebLogic Kubernetes Operator version 2.0 which provides enhancements to the management of WebLogic domains. We continue to provide tooling to make it simple to provision, deploy, and manage WebLogic domains with the goal of providing the greatest degree of flexibility for where these domains can run.  We hope this sample is helpful to anyone wanting to use the WebLogic Deploy Tooling for provisioning and deploying WebLogic Server domains, and we look forward to your feedback.


WebLogic Server JTA in a Kubernetes Environment

This blog post describes WebLogic Server global transactions running in a Kubernetes environment.  First, we’ll review how the WebLogic Server Transaction Manager (TM) processes distributed transactions.  Then, we’ll walk through an example transactional application that is deployed to WebLogic Server domains running in a Kubernetes cluster with the WebLogic Kubernetes Operator.    WebLogic Server Transaction Manager Introduction The WebLogic Server Transaction Manager (TM) is the transaction processing monitor implementation in WebLogic Server that supports the Java Enterprise Edition (Java EE) Java Transaction API (JTA).  A Java EE application uses JTA to manage global transactions to ensure that changes to resource managers, such as databases and messaging systems, either complete as a unit, or are undone. This section provides a brief introduction to the WebLogic Server TM, specifically around network communication and related configuration, which will be helpful when we examine transactions in a Kubernetes environment.  There are many TM features, optimizations, and configuration options that won’t be covered in this article.  Refer to the following WebLogic Server documentation for additional details: ·      For general information about the WebLogic Server TM, see the WebLogic Server JTA documentation. ·      For detailed information regarding the Java Transaction API, see the Java EE JTA Specification. How Transactions are Processed in WebLogic Server To get a basic understanding of how the WebLogic Server TM processes transactions, we’ll look at a hypothetical application.  Consider a web application consisting of a servlet that starts a transaction, inserts a record in a database table, and sends a message to a Java Messaging Service (JMS) queue destination.  After updating the JDBC and JMS resources, the servlet commits the transaction.   The following diagram shows the server and resource transaction participants. Transaction Propagation The transaction context builds up state as it propagates between servers and as resources are accessed by the application.  For this application, the transaction context at commit time would look something like the following. Server participants, identified by domain name and server name, have an associated URL that is used for internal TM communication.  These URLs are typically derived from the server’s default network channel, or default secure network channel.  The transaction context also contains information about which server participants have javax.transaction.Synchronization callbacks registered.  The JTA synchronization API is a callback mechanism where the TM invokes the Synchronization.beforeCompletion() method before commencing two-phase commit processing for a transaction.   The Synchronization.afterCompletion(int status) method is invoked after transaction processing is complete with the final status of the transaction (for example, committed, rolled back, and such).  Transaction Completion When the TM is instructed to commit the transaction, the TM takes over and coordinates the completion of the transaction.  One of the server participants is chosen as the transaction coordinator to drive the two-phase commit protocol.  The coordinator instructs the remaining subordinate servers to process registered synchronization callbacks, and to prepare, commit, or rollback resources.  The TM communication channels used to coordinate the example transaction are illustrated in the following diagram. 
The dashed-line arrows represent asynchronous RMI calls between the coordinator and subordinate servers.  Note that the Synchronization.beforeCompletion() communication can take place directly between subordinate servers.  It is also important to point out that application communication is conceptually separate from the internal TM communication, as the TM may establish network channels that were not used by the application to propagate the transaction.  The TM could use different protocols, addresses, and ports depending on how the server default network channels are configured. Configuration Recommendations There are a few TM configuration recommendations related to server network addresses, persistent storage, and server naming. Server Network Addresses As mentioned previously, server participants locate each other using URLs included in the transaction context.  It is important that the network channels used for TM URLs be configured with address names that are resolvable after node, pod, or container restarts where IP addresses may change.  Also, because the TM requires direct server-to-server communication, cluster or load-balancer addresses that resolve to multiple IP addresses should not be used. Transaction Logs The coordinating server persists state in the transaction log (TLOG) that is used for transaction recovery processing after failure.  Because a server instance may relocate to another node, the TLOG needs to reside in a network/replicated file system (for example, NFS, SAN, and such) or in a highly-available database such as Oracle RAC.  For additional information, refer to the High Availability Guide. Cross-Domain Transactions Transactions that span WebLogic Server domains are referred to as cross-domain transactions.  Cross-domain transactions introduce additional configuration requirements, especially when the domains are connected by a public network. Server Naming The TM identifies server participants using a combination of the domain name and server name.  Therefore, each domain should be named uniquely to prevent name collisions.  Server participant name collisions will cause transactions to be rolled back at runtime. Security Server participants that are connected by a public network require the use of secure protocols (for example, t3s) and authorization checks to verify that the TM communication is legitimate.  For the purpose of this demonstration, we won’t cover these topics in detail.  For the Kubernetes example application, all TM communication will take place on the private Kubernetes network and will use a non-SSL protocol. For details on configuring security for cross-domain transactions, refer to the Configuring Secure Inter-Domain and Intra-Domain Transaction Communication chapter of the Fusion Middleware Developing JTA Applications for Oracle WebLogic Server documentation. WebLogic Server on Kubernetes In an effort to improve WebLogic Server integration with Kubernetes, Oracle has released the open source WebLogic Kubernetes Operator.   The WebLogic Kubernetes Operator supports the creation and management of WebLogic Server domains, integration with various load balancers, and additional capabilities.  For details refer to the GitHub project page, https://github.com/oracle/weblogic-kubernetes-operator, and the related blogs at https://blogs.oracle.com/weblogicserver/how-to-weblogic-server-on-kubernetes. 
Example Transactional Application Walkthrough To illustrate running distributed transactions on Kubernetes, we’ll step through a simplified transactional application that is deployed to multiple WebLogic Server domains running in a single Kubernetes cluster.  The environment that I used for this example is a Mac running Docker Edge v18.05.0-ce that includes Kubernetes v1.9.6. After installing and starting Docker Edge, open the Preferences page, increase the memory available to Docker under the Advanced tab (~8 GiB) and enable Kubernetes under the Kubernetes tab.  After applying the changes, Docker and Kubernetes will be started.  If you are behind a firewall, you may also need to add the appropriate settings under the Proxies tab.  Once running, you should be able to list the Kubernetes version information. $ kubectl version Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:21:50Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:13:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"} To keep the example file system path names short, the working directory for input files, operator sources and binaries, persistent volumes, and such, are created under $HOME/k8sop. You can reference the directory using the environment variable $K8SOP. $ export K8SOP=$HOME/k8sop $ mkdir $K8SOP Install the WebLogic Kubernetes Operator The next step will be to build and install the weblogic-kubernetes-operator image.  Refer to the installation procedures at https://github.com/oracle/weblogic-kubernetes-operator/blob/master/site/installation.md.  Note that for this example, the weblogic-kubernetes-operator GitHub project will be cloned under the $K8SOP/src directory ($K8SOP/src/weblogic-kubernetes-operator).  Also note that when building the Docker image, use the tag “local” in place of “some-tag” that’s specified in the installation docs. $ mkdir $K8SOP/src $ cd $K8SOP/src $ git clone https://github.com/oracle/weblogic-kubernetes-operator.git $ cd weblogic-kubernetes-operator $ mvn clean install $ docker login $ docker build -t weblogic-kubernetes-operator:local --no-cache=true . After building the operator image, you should see it in the local registry. $ docker images weblogic-kubernetes-operator REPOSITORY                     TAG                 IMAGE ID            CREATED             SIZE weblogic-kubernetes-operator   local               42a5f70c7287        10 seconds ago      317MB The next step will be to deploy the operator to the Kubernetes cluster.  For this example, we will modify the create-weblogic-operator-inputs.yaml file to add an additional target namespace (weblogic) and specify the correct operator image name. Attribute Value targetNamespaces default,weblogic weblogicOperatorImage weblogic-kubernetes-operator:local javaLoggingLevel WARNING   Save the modified input file under $K8SOP/create-weblogic-operator-inputs.yaml. Then run the create-weblogic-operator.sh script, specifying the path to the modified create-weblogic-operator.yaml input file and the path of the operator output directory. 
$ cd $K8SOP $ mkdir weblogic-kubernetes-operator $ $K8SOP/src/weblogic-kubernetes-operator/kubernetes/create-weblogic-operator.sh -i $K8SOP/create-weblogic-operator-inputs.yaml -o $K8SOP/weblogic-kubernetes-operator When the script completes you will be able to see the operator pod running. $ kubectl get po -n weblogic-operator NAME                                 READY     STATUS    RESTARTS   AGE weblogic-operator-6dbf8bf9c9-prhwd   1/1       Running   0          44s WebLogic Domain Creation The procedures for creating a WebLogic Server domain are documented at https://github.com/oracle/weblogic-kubernetes-operator/blob/master/site/creating-domain.md.  Follow the instructions for pulling the WebLogic Server image from the Docker store into the local registry.  You’ll be able to pull the image after accepting the license agreement on the Docker store. $ docker login $ docker pull store/oracle/weblogic:12.2.1.3 Next, we’ll create a Kubernetes secret to hold the administrative credentials for our domain (weblogic/weblogic1). $ kubectl -n weblogic create secret generic domain1-weblogic-credentials --from-literal=username=weblogic --from-literal=password=weblogic1 The persistent volume location for the domain will be under $K8SOP/volumes/domain1. $ mkdir -m 777 -p $K8SOP/volumes/domain1 Then we’ll customize the $K8SOP/src/weblogic-kubernetes-operator/kubernetes/create-weblogic-domain-inputs.yaml example input file, modifying the following attributes: Attribute Value weblogicDomainStoragePath {full path of $HOME}/k8sop/volumes/domain1 domainName domain1 domainUID domain1 t3PublicAddress {your-local-hostname} exposeAdminT3Channel true exposeAdminNodePort true namespace weblogic   After saving the updated input file to $K8SOP/create-domain1.yaml, invoke the create-weblogic-domain.sh script as follows. $ $K8SOP/src/weblogic-kubernetes-operator/kubernetes/create-weblogic-domain.sh -i $K8SOP/create-domain1.yaml -o $K8SOP/weblogic-kubernetes-operator After the create-weblogic-domain.sh script completes, Kubernetes will start up the Administration Server and the clustered Managed Server instances.  After a while, you can see the running pods. $ kubectl get po -n weblogic NAME                                        READY     STATUS    RESTARTS   AGE domain1-admin-server                        1/1       Running   0          5m domain1-cluster-1-traefik-9985d9594-gw2jr   1/1       Running   0          5m domain1-managed-server1                     1/1       Running   0          3m domain1-managed-server2                     1/1       Running   0          3m Now we will access the running Administration Server using the WebLogic Server Administration Console to check the state of the domain using the URL http://localhost:30701/console with the credentials weblogic/weblogic1.  The following screen shot shows the Servers page. The Administration Console Servers page shows all of the servers in domain1.  Note that each server has a listen address that corresponds to a Kubernetes service name that is defined for the specific server instance.  The service name is derived from the domainUID (domain1) and the server name. These address names are resolvable within the Kubernetes namespace and, along with the listen port, are used to define each server’s default network channel.  As mentioned previously, the default network channel URLs are propagated with the transaction context and are used internally by the TM for distributed transaction coordination. 
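You can also confirm those listen addresses from the command line; the operator creates one service per server, named after the domainUID and the server name. A quick check (a sketch, using the namespace and names from this example):

$ kubectl get services -n weblogic
$ kubectl describe service domain1-managed-server1 -n weblogic

The service name, together with the default channel port (8001 for the Managed Servers in this example), is the address form the TM propagates in the transaction context and uses for inter-server transaction coordination, as described above.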
Example Application Now that we have a WebLogic Server domain running under Kubernetes, we will look at an example application that can be used to verify distributed transaction processing.  To make the example as simple as possible, it will be limited in scope to transaction propagation between servers and synchronization callback processing.  This will allow us to verify inter-server transaction communication without the need for resource manager configuration and the added complexity of writing JDBC or JMS client code. The application consists of two main components: a servlet front end and an RMI remote object.  The servlet processes a GET request that contains a list of URLs.  It starts a global transaction and then invokes the remote object at each of the URLs.  The remote object simply registers a synchronization callback that prints a message to stdout in the beforeCompletion and afterCompletion callback methods.  Finally, the servlet commits the transaction and sends a response containing information about each of the RMI calls and the outcome of the global transaction. The following diagram illustrates running the example application on the domain1 servers in the Kubernetes cluster.  The servlet is invoked using the Administration Server’s external port.  The servlet starts the transaction, registers a local synchronization object, and invokes the register operation on the Managed Servers using their Kubernetes internal URLs:  t3://domain1-managed-server1:8001 and t3://domain1-managed-server2:8001. TxPropagate Servlet As mentioned above, the servlet starts a transaction and then invokes the RemoteSync.register() remote method on each of the server URLs specified.  Then the transaction is committed and the results are returned to the caller. 
package example;   import java.io.IOException; import java.io.PrintWriter;   import javax.naming.Context; import javax.naming.InitialContext; import javax.naming.NamingException; import javax.servlet.ServletException; import javax.servlet.annotation.WebServlet; import javax.servlet.http.HttpServlet; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import javax.transaction.HeuristicMixedException; import javax.transaction.HeuristicRollbackException; import javax.transaction.NotSupportedException; import javax.transaction.RollbackException; import javax.transaction.SystemException;   import weblogic.transaction.Transaction; import weblogic.transaction.TransactionHelper; import weblogic.transaction.TransactionManager;   @WebServlet("/TxPropagate") public class TxPropagate extends HttpServlet {   private static final long serialVersionUID = 7100799641719523029L;   private TransactionManager tm = (TransactionManager)       TransactionHelper.getTransactionHelper().getTransactionManager();     protected void doGet(HttpServletRequest request,       HttpServletResponse response) throws ServletException, IOException {     PrintWriter out = response.getWriter();       String urlsParam = request.getParameter("urls");     if (urlsParam == null) return;     String[] urls = urlsParam.split(",");       try {       RemoteSync forward = (RemoteSync)           new InitialContext().lookup(RemoteSync.JNDINAME);       tm.begin();       Transaction tx = (Transaction) tm.getTransaction();       out.println("<pre>");       out.println(Utils.getLocalServerID() + " started " +           tx.getXid().toString());       out.println(forward.register());       for (int i = 0; i < urls.length; i++) {         out.println(Utils.getLocalServerID() + " " + tx.getXid().toString() +             " registering Synchronization on " + urls[i]);         Context ctx = Utils.getContext(urls[i]);         forward = (RemoteSync) ctx.lookup(RemoteSync.JNDINAME);         out.println(forward.register());       }       tm.commit();       out.println(Utils.getLocalServerID() + " committed " + tx);     } catch (NamingException | NotSupportedException | SystemException |         SecurityException | IllegalStateException | RollbackException |         HeuristicMixedException | HeuristicRollbackException e) {       throw new ServletException(e);     }   } Remote Object The RemoteSync remote object contains a single method, register, that registers a javax.transaction.Synchronization callback with the propagated transaction context. RemoteSync Interface The following is the example.RemoteSync remote interface definition. package example;   import java.rmi.Remote; import java.rmi.RemoteException;   public interface RemoteSync extends Remote {   public static final String JNDINAME = "propagate.RemoteSync";   String register() throws RemoteException; } RemoteSyncImpl Implementation The example.RemoteSyncImpl class implements the example.RemoteSync remote interface and contains an inner synchronization implementation class named SynchronizationImpl.  The beforeCompletion and afterCompletion methods simply write a message to stdout containing the server ID (domain name and server name) and the Xid string representation of the propagated transaction. The static main method instantiates a RemoteSyncImpl object and binds it into the server’s local JNDI context.  The main method is invoked when the application is deployed using the ApplicationLifecycleListener, as described below. 
package example;   import java.rmi.RemoteException;   import javax.naming.Context; import javax.transaction.RollbackException; import javax.transaction.Synchronization; import javax.transaction.SystemException;   import weblogic.jndi.Environment; import weblogic.transaction.Transaction; import weblogic.transaction.TransactionHelper;   public class RemoteSyncImpl implements RemoteSync {     public String register() throws RemoteException {     Transaction tx = (Transaction)         TransactionHelper.getTransactionHelper().getTransaction();     if (tx == null) return Utils.getLocalServerID() +         " no transaction, Synchronization not registered";     try {       Synchronization sync = new SynchronizationImpl(tx);       tx.registerSynchronization(sync);       return Utils.getLocalServerID() + " " + tx.getXid().toString() +           " registered " + sync;     } catch (IllegalStateException | RollbackException |         SystemException e) {       throw new RemoteException(           "error registering Synchronization callback with " +       tx.getXid().toString(), e);     }   }     class SynchronizationImpl implements Synchronization {     Transaction tx;         SynchronizationImpl(Transaction tx) {       this.tx = tx;     }         public void afterCompletion(int arg0) {       System.out.println(Utils.getLocalServerID() + " " +           tx.getXid().toString() + " afterCompletion()");     }       public void beforeCompletion() {       System.out.println(Utils.getLocalServerID() + " " +           tx.getXid().toString() + " beforeCompletion()");     }   }     // create and bind remote object in local JNDI   public static void main(String[] args) throws Exception {     RemoteSyncImpl remoteSync = new RemoteSyncImpl();     Environment env = new Environment();     env.setCreateIntermediateContexts(true);     env.setReplicateBindings(false);     Context ctx = env.getInitialContext();     ctx.rebind(JNDINAME, remoteSync);     System.out.println("bound " + remoteSync);   } } Utility Methods The Utils class contains a couple of static methods, one to get the local server ID and another to perform an initial context lookup given a URL.  The initial context lookup is invoked under the anonymous user.  These methods are used by both the servlet and the remote object. package example;   import java.util.Hashtable;   import javax.naming.Context; import javax.naming.InitialContext; import javax.naming.NamingException;   public class Utils {     public static Context getContext(String url) throws NamingException {     Hashtable env = new Hashtable();     env.put(Context.INITIAL_CONTEXT_FACTORY,         "weblogic.jndi.WLInitialContextFactory");     env.put(Context.PROVIDER_URL, url);     return new InitialContext(env);   }     public static String getLocalServerID() {     return "[" + getDomainName() + "+"         + System.getProperty("weblogic.Name") + "]";   }     private static String getDomainName() {     String domainName = System.getProperty("weblogic.Domain");     if (domainName == null) domainName = System.getenv("DOMAIN_NAME");     return domainName;   } } ApplicationLifecycleListener When the application is deployed to a WebLogic Server instance, the lifecycle listener preStart method is invoked to initialize and bind the RemoteSync remote object. 
package example;   import weblogic.application.ApplicationException; import weblogic.application.ApplicationLifecycleEvent; import weblogic.application.ApplicationLifecycleListener;   public class LifecycleListenerImpl extends ApplicationLifecycleListener {     public void preStart (ApplicationLifecycleEvent evt)       throws ApplicationException {     super.preStart(evt);     try {       RemoteSyncImpl.main(null);     } catch (Exception e) {       throw new ApplicationException(e);     }   } } Application Deployment Descriptor The application archive contains the following weblogic-application.xml deployment descriptor to register the ApplicationLifecycleListener object. <?xml version = '1.0' ?> <weblogic-application xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.bea.com/ns/weblogic/weblogic-application http://www.bea.com/ns/weblogic/weblogic-application/1.0/weblogic-application.xsd" xmlns="http://www.bea.com/ns/weblogic/weblogic-application">    <listener>     <listener-class>example.LifecycleListenerImpl</listener-class>     <listener-uri>lib/remotesync.jar</listener-uri>   </listener> </weblogic-application> Deploying the Application The example application can be deployed using a number of supported deployment mechanisms (refer to https://blogs.oracle.com/weblogicserver/best-practices-for-application-deployment-on-weblogic-server-running-on-kubernetes-v2).  For this example, we’ll deploy the application using the WebLogic Server Administration Console. Assume that the application is packaged in an application archive named txpropagate.ear.  First, we’ll copy txpropagate.ear to the applications directory under the domain1 persistent volume location ($K8SOP/volumes/domain1/applications).  Then we can deploy the application from the Administration Console’s Deployment page. Note that the path of the EAR file is /shared/applications/txpropagate.ear within the Administration Server’s container, where /shared is mapped to the persistent volume that we created at $K8SOP/volumes/domain1. Deploy the EAR as an application and then target it to the Administration Server and the cluster. On the next page, click Finish to deploy the application.  After the application is deployed, you’ll see its entry in the Deployments table. Running the Application Now that we have the application deployed to the servers in domain1, we can run a distributed transaction test.  The following CURL operation invokes the servlet using the load balancer port 30305 for the clustered Managed Servers and specifies the URL of managed-server1. 
$ curl http://localhost:30305/TxPropagate/TxPropagate?urls=t3://domain1-managed-server1:8001 <pre> [domain1+managed-server2] started BEA1-0001DE85D4EE [domain1+managed-server2] BEA1-0001DE85D4EEC47AE630 registered example.RemoteSyncImpl$SynchronizationImpl@562a85bd [domain1+managed-server2] BEA1-0001DE85D4EEC47AE630 registering Synchronization on t3://domain1-managed-server1:8001 [domain1+managed-server1] BEA1-0001DE85D4EEC47AE630 registered example.RemoteSyncImpl$SynchronizationImpl@585ff41b [domain1+managed-server2] committed Xid=BEA1-0001DE85D4EEC47AE630(844351585),Status=Committed,numRepliesOwedMe=0,numRepliesOwedOthers=0,seconds since begin=0,seconds left=120,useSecure=false,SCInfo[domain1+managed-server2]=(state=committed),SCInfo[domain1+managed-server1]=(state=committed),properties=({ackCommitSCs={managed-server1+domain1-managed-server1:8001+domain1+t3+=true}, weblogic.transaction.partitionName=DOMAIN}),OwnerTransactionManager=ServerTM[ServerCoordinatorDescriptor=(CoordinatorURL=managed-server2+domain1-managed-server2:8001+domain1+t3+ CoordinatorNonSecureURL=managed-server2+domain1-managed-server2:8001+domain1+t3+ coordinatorSecureURL=null, XAResources={WSATGatewayRM_managed-server2_domain1},NonXAResources={})],CoordinatorURL=managed-server2+domain1-managed-server2:8001+domain1+t3+) The following diagram shows the application flow. Looking at the output, we see that the servlet request was dispatched on managed-server2 where it started the transaction BEA1-0001DE85D4EE.   [domain1+managed-server2] started BEA1-0001DE85D4EE The local RemoteSync.register() method was invoked which registered the callback object SynchronizationImpl@562a85bd. [domain1+managed-server2] BEA1-0001DE85D4EEC47AE630 registered example.RemoteSyncImpl$SynchronizationImpl@562a85bd The servlet then invoked the register method on the RemoteSync object on managed-server1, which registered the synchronization object SynchronizationImpl@585ff41b. [domain1+managed-server2] BEA1-0001DE85D4EEC47AE630 registering Synchronization on t3://domain1-managed-server1:8001 [domain1+managed-server1] BEA1-0001DE85D4EEC47AE630 registered example.RemoteSyncImpl$SynchronizationImpl@585ff41b Finally, the servlet committed the transaction and returned the transaction’s string representation (typically used for TM debug logging). [domain1+managed-server2] committed Xid=BEA1-0001DE85D4EEC47AE630(844351585),Status=Committed,numRepliesOwedMe=0,numRepliesOwedOthers=0,seconds since begin=0,seconds left=120,useSecure=false,SCInfo[domain1+managed-server2]=(state=committed),SCInfo[domain1+managed-server1]=(state=committed),properties=({ackCommitSCs={managed-server1+domain1-managed-server1:8001+domain1+t3+=true}, weblogic.transaction.partitionName=DOMAIN}),OwnerTransactionManager=ServerTM[ServerCoordinatorDescriptor=(CoordinatorURL=managed-server2+domain1-managed-server2:8001+domain1+t3+ CoordinatorNonSecureURL=managed-server2+domain1-managed-server2:8001+domain1+t3+ coordinatorSecureURL=null, XAResources={WSATGatewayRM_managed-server2_domain1},NonXAResources={})],CoordinatorURL=managed-server2+domain1-managed-server2:8001+domain1+t3+) The output shows that the transaction was committed, that it has two server participants (managed-server1 and managed-server2) and that the coordinating server (managed-server2) is accessible using t3://domain1-managed-server2:8001. We can also verify that the registered synchronization callbacks were invoked by looking at the output of admin-server and managed-server1.  
The .out files for the servers can be found under the persistent volume of the domain. $ cd $K8SOP/volumes/domain1/domain/domain1/servers $ find . -name '*.out' -exec grep -H BEA1-0001DE85D4EE {} ';' ./managed-server1/logs/managed-server1.out:[domain1+managed-server1] BEA1-0001DE85D4EEC47AE630 beforeCompletion() ./managed-server1/logs/managed-server1.out:[domain1+managed-server1] BEA1-0001DE85D4EEC47AE630 afterCompletion() ./managed-server2/logs/managed-server2.out:[domain1+managed-server2] BEA1-0001DE85D4EEC47AE630 beforeCompletion() ./managed-server2/logs/managed-server2.out:[domain1+managed-server2] BEA1-0001DE85D4EEC47AE630 afterCompletion() To summarize, we were able to process distributed transactions within a WebLogic Server domain running in a Kubernetes cluster without having to make any changes.  The WebLogic Kubernetes Operator domain creation process provided all of the Kubernetes networking and WebLogic Server configuration necessary to make it possible.  The following command lists the Kubernetes services defined in the weblogic namespace. $ kubectl get svc -n weblogic NAME                                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)           AGE domain1-admin-server                        NodePort    10.102.156.32    <none>        7001:30701/TCP    11m domain1-admin-server-extchannel-t3channel   NodePort    10.99.21.154     <none>        30012:30012/TCP   9m domain1-cluster-1-traefik                   NodePort    10.100.211.213   <none>        80:30305/TCP      11m domain1-cluster-1-traefik-dashboard         NodePort    10.108.229.66    <none>        8080:30315/TCP    11m domain1-cluster-cluster-1                   ClusterIP   10.106.58.103    <none>        8001/TCP          9m domain1-managed-server1                     ClusterIP   10.108.85.130    <none>        8001/TCP          9m domain1-managed-server2                     ClusterIP   10.108.130.92    <none>        8001/TCP We were able to access the servlet through the Traefik NodePort service using port 30305 on localhost.  From inside the Kubernetes cluster, the servlet is able to access other WebLogic Server instances using their service names and ports.  Because each server’s listen address is set to its corresponding Kubernetes service name, the addresses are resolvable from within the Kubernetes namespace even if a server’s pod is restarted and assigned a different IP address. Cross-Domain Transactions Now we’ll look at extending the example to run across two WebLogic Server domains.  As mentioned in the TM overview section, cross-domain transactions can require additional configuration to properly secure TM communication.  However, for our example, we will keep the configuration as simple as possible.  We’ll continue to use a non-secure protocol (t3), and the anonymous user, for both application and internal TM communication. First, we’ll need to create a new domain (domain2) in the same Kubernetes namespace as domain1 (weblogic).  Before generating domain2 we need to create a secret for the domain2 credentials (domain2-weblogic-credentials) in the weblogic namespace and a directory for the persistent volume ($K8SOP/volumes/domain2). Next, modify the create-domain1.yaml file, changing the following attribute values, and save the changes to a new file named create-domain2.yaml. 
Attribute Value domainName domain2 domainUID domain2 weblogicDomainStoragePath {full path of $HOME}/k8sop/volumes/domain2 weblogicCredentialsSecretName domain2-weblogic-credentials t3ChannelPort 32012 adminNodePort 32701 loadBalancerWebPort 32305 loadBalancerDashboardPort 32315   Now we’re ready to invoke the create-weblogic-domain.sh script with the create-domain2.yaml input file. $ $K8SOP/src/weblogic-kubernetes-operator/kubernetes/create-weblogic-domain.sh -i $K8SOP/create-domain2.yaml -o $K8SOP/weblogic-kubernetes-operator After the create script completes successfully, the servers in domain2 will start and, using the readiness probe, report that they have reached the RUNNING state. $ kubectl get po -n weblogic NAME                                         READY     STATUS    RESTARTS   AGE domain1-admin-server                         1/1       Running   0          27m domain1-cluster-1-traefik-9985d9594-gw2jr    1/1       Running   0          27m domain1-managed-server1                      1/1       Running   0          25m domain1-managed-server2                      1/1       Running   0          25m domain2-admin-server                         1/1       Running   0          5m domain2-cluster-1-traefik-5c49f54689-9fzzr   1/1       Running   0          5m domain2-managed-server1                      1/1       Running   0          3m domain2-managed-server2                      1/1       Running   0          3m After deploying the application to the servers in domain2, we can invoke the application and include the URLs for the domain2 Managed Servers.  $ curl http://localhost:30305/TxPropagate/TxPropagate?urls=t3://domain2-managed-server1:8001,t3://domain2-managed-server2:8001 <pre> [domain1+managed-server1] started BEA1-0001144553CC [domain1+managed-server1] BEA1-0001144553CC5D73B78A registered example.RemoteSyncImpl$SynchronizationImpl@2e13aa23 [domain1+managed-server1] BEA1-0001144553CC5D73B78A registering Synchronization on t3://domain2-managed-server1:8001 [domain2+managed-server1] BEA1-0001144553CC5D73B78A registered example.RemoteSyncImpl$SynchronizationImpl@68d4c2d6 [domain1+managed-server1] BEA1-0001144553CC5D73B78A registering Synchronization on t3://domain2-managed-server2:8001 [domain2+managed-server2] BEA1-0001144553CC5D73B78A registered example.RemoteSyncImpl$SynchronizationImpl@1ae87d94 [domain1+managed-server1] committed Xid=BEA1-0001144553CC5D73B78A(1749245151),Status=Committed,numRepliesOwedMe=0,numRepliesOwedOthers=0,seconds since begin=0,seconds left=120,useSecure=false,SCInfo[domain1+managed-server1]=(state=committed),SCInfo[domain2+managed-server1]=(state=committed),SCInfo[domain2+managed-server2]=(state=committed),properties=({ackCommitSCs={managed-server2+domain2-managed-server2:8001+domain2+t3+=true, managed-server1+domain2-managed-server1:8001+domain2+t3+=true}, weblogic.transaction.partitionName=DOMAIN}),OwnerTransactionManager=ServerTM[ServerCoordinatorDescriptor=(CoordinatorURL=managed-server1+domain1-managed-server1:8001+domain1+t3+ CoordinatorNonSecureURL=managed-server1+domain1-managed-server1:8001+domain1+t3+ coordinatorSecureURL=null, XAResources={WSATGatewayRM_managed-server1_domain1},NonXAResources={})],CoordinatorURL=managed-server1+domain1-managed-server1:8001+domain1+t3+) The application flow is shown in the following diagram. In this example, the transaction includes server participants from both domain1 and domain2, and we can verify that the synchronization callbacks were processed on all participating servers. $ cd $K8SOP/volumes $ find . 
-name '*.out' -exec grep -H BEA1-0001144553CC {} ';' ./domain1/domain/domain1/servers/managed-server1/logs/managed-server1.out:[domain1+managed-server1] BEA1-0001144553CC5D73B78A beforeCompletion() ./domain1/domain/domain1/servers/managed-server1/logs/managed-server1.out:[domain1+managed-server1] BEA1-0001144553CC5D73B78A afterCompletion() ./domain2/domain/domain2/servers/managed-server1/logs/managed-server1.out:[domain2+managed-server1] BEA1-0001144553CC5D73B78A beforeCompletion() ./domain2/domain/domain2/servers/managed-server1/logs/managed-server1.out:[domain2+managed-server1] BEA1-0001144553CC5D73B78A afterCompletion() ./domain2/domain/domain2/servers/managed-server2/logs/managed-server2.out:[domain2+managed-server2] BEA1-0001144553CC5D73B78A beforeCompletion() ./domain2/domain/domain2/servers/managed-server2/logs/managed-server2.out:[domain2+managed-server2] BEA1-0001144553CC5D73B78A afterCompletion() Summary In this article we reviewed, at a high level, how the WebLogic Server Transaction Manager processes global transactions and discussed some of the basic configuration requirements.   We then looked at an example application to illustrate how cross-domain transactions are processed in a Kubernetes cluster.   In future articles we’ll look at more complex transactional use-cases such as multi-node, cross Kubernetes cluster transactions, failover, and such.


Announcing WebLogic Server Certification on Oracle Cloud Infrastructure Container Engine for Kubernetes

On May 7th we announced the General Availability (GA) version of the WebLogic Kubernetes Operator, including certification of WebLogic Server and Operator configurations running on the Oracle Cloud Infrastructure (OCI). In this initial announcement, WebLogic Server and Operator OCI certification was provided on Kubernetes clusters created on OCI using the Terraform Kubernetes Installer. For more details on this announcement, please refer to the announcement blog Announcing General Availability version of the WebLogic Kubernetes Operator. Today we are announcing the additional certification of WebLogic Server and Operator configurations on the Oracle Container Engine for Kubernetes running on OCI; see the blog Kubernetes: A Cloud (and Data Center) Operating System?. The Oracle Container Engine for Kubernetes is a fully managed, scalable, and highly available service that you can use to deploy your containerized applications to the cloud. In the blog How to run WebLogic clusters on the Oracle Cloud Infrastructure Container Engine for Kubernetes, we describe the steps to run a WebLogic domain/cluster managed by the WebLogic Kubernetes Operator on OCI Container Engine for Kubernetes, with WebLogic and Operator images stored in the OCI Registry. Very soon, we hope to provide an easy way to migrate existing WebLogic Server domains in Kubernetes using the WebLogic Deploy Tooling, add CI/CD of WebLogic deployments on Kubernetes with Oracle Container Pipelines, and add new features and enhancements over time. The WebLogic Server and Operator capabilities described are supported on standard Kubernetes infrastructure, with full compatibility across OCI and other private and public cloud platforms that use Kubernetes. The Operator, Prometheus Exporter, and WebLogic Deploy Tooling are all being developed in open source. We are open to your feedback – thanks!

Safe Harbor Statement
The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.


How to run WebLogic clusters on the Oracle Cloud Infrastructure Container Engine for Kubernetes

There are various options for setting up a Kubernetes environment in order to run WebLogic clusters. Oracle supports customers who want to run WebLogic clusters in production or development mode and on Kubernetes clusters on-premises or in the cloud. In this blog, we describe the steps to run a WebLogic cluster using the Oracle Cloud Infrastructure (OCI) Container Engine for Kubernetes. The Kubernetes managed service is fully integrated with the underlying Oracle Cloud Infrastructure (OCI), making it easy to provision a Kubernetes cluster and to provide the required services, such as a load balancer, volumes, and network fabric. Prerequisites: Docker images: WebLogic Server (weblogic-12.2.1.3:latest). WebLogic Kubernetes Operator (weblogic-operator:latest) Traefik Load Balancer (traefik:1.4.5) A workstation with Docker and kubectl, installed and configured. The Oracle Container Engine for Kubernetes on OCI. To setup a Kubernetes managed service on OCI, follow the documentation Overview of Container Engine for Kubernetes. OCI Container Engine for Kubernetes nodes are accessible using ssh. The Oracle Cloud Infrastructure Registry to push the WebLogic Server, Operator, and Load Balancer images. Prepare the WebLogic Kubernetes Operator environment To prepare the environment, we need to: ·Test accessibility and set up the RBAC policy for the OCI Container Engine for the Kubernetes cluster Set up the NFS server Upload the Docker images to the OCI Registry (OCIR) Modify the configuration YAML files to reflect the Docker images’ names in the OCIR Test accessibility and set up the RBAC policy for the OKE cluster To check the accessibility to the OCI Container Engine for Kubernetes nodes, enter the command: kubectl get nodes The output of the command will display the nodes, similar to the following: NAME              STATUS    ROLES     AGE       VERSION 129.146.109.106   Ready     node      5h        v1.9.4 129.146.22.123    Ready     node      5h        v1.9.4 129.146.66.11     Ready     node      5h        v1.9.4 In order to have permission to access the Kubernetes cluster, you need to authorize your OCI account as a cluster-admin on the OCI Container Engine for Kubernetes cluster.  This will require your OCID, which is available on the OCI console page, under your user settings. For example, if your user OCID is ocid1.user.oc1..aaaaaaaac26kw7qvuij7i6fadabklqfb7svyuhpitedmguspv6ht67i5l32q, the command would be: kubectl create clusterrolebinding my-cluster-admin-binding --clusterrole=cluster-admin --user=ocid1.user.oc1..aaaaaaaac26kw7qvuij7i6fadabklqfb7svyuhpitedmguspv6ht67i5l32q Set up the NFS server In the current GA version, the OCI Container Engine for Kubernetes supports network block storage that can be shared across nodes with access permission RWOnce (meaning that only one can write, others can read only). At this time, the WebLogic on Kubernetes domain created by the WebLogic Server Kubernetes Operator, requires a shared file system to store the WebLogic domain configuration, which MUST be accessible from all the pods across the nodes. As a workaround, you need to install an NFS server on one node and share the file system across all the nodes. Note: Currently, we recommend that you use NFS version 3.0 for running WebLogic Server on OCI Container Engine for Kubernetes. During certification, we found that when using NFS 4.0, the servers in the WebLogic domain went into a failed state intermittently. 
Because multiple threads use NFS (default store, diagnostics store, Node Manager, logging, and domain_home), there are issues when accessing the file store. These issues are removed by changing the NFS to version 3.0. In this demo, the Kubernetes cluster is using nodes with these IP addresses: Node1: 129.146.109.106   Node2: 129.146.22.123   Node3: 129.146.66.11 In the above case, let’s install the NFS server on Node1 with the IP address 129.146.109.106, and use Node2 (IP:129.146.22.123)and Node3 (IP:129.146.66.11) as clients. Log in to each of the nodes using ssh to retrieve the private IP address, by executing the command: ssh -i ~/.ssh/id_rsa opc@[Public IP of Node] ip addr | grep ens3 ~/.ssh/id_rsa is the path to the private ssh RSA key. For example, for Node1: ssh -i ~/.ssh/id_rsa opc@129.146.109.106 ip addr | grep ens3 Retrieve the inet value for each node. For this demo, here is the collected information: Nodes: Public IP Private IP Node1 (NFS Server)   129.146.109.106       10.0.11.3   Node2   129.146.22.123       10.0.11.1   Node3   129.146.66.11     10.0.11.2   Log in using ssh to Node1, and install and set up NFS for Node1 (NFS Server): sudo su - yum install -y nfs-utils mkdir /scratch chown -R opc:opc /scratch Edit the /etc/exports file to add the internal IP addresses of Node2 and Node3: vi /etc/exports /scratch 10.0.11.1(rw) /scratch 10.0.11.2(rw) systemctl restart nfs exit Log in using ssh to Node2: ssh -i ~/.ssh/id_rsa opc@129.146.22.123 sudo su - yum install -y nfs-utils mkdir /scratch Edit the /etc/fstab file to add the internal IP address of Node1: vi /etc/fstab 10.0.11.3:/scratch /scratch nfs nfsvers=3 0 0 mount /scratch exit Repeat the same steps for Node3: ssh -i ~/.ssh/id_rsa opc@129.146.66.11 sudo su - yum install -y nfs-utils mkdir /scratch Edit the /etc/fstab file to add the internal IP address of Node1: vi /etc/fstab 10.0.11.3:/scratch /scratch nfs nfsvers=3 0 0 mount /scratch exit Upload the Docker images to the OCI Registry Build the required Docker images for WebLogic 12.2.1.3 and WebLogic Kubernetes Operator. Pull the Traefik Docker Image from the Docker Hub repository, for example:   docker login docker pull traefik:1.4.5   Tag the Docker images, as follows:   docker tag [Name Of Your Image For Operator] phx.ocir.io/weblogicondocker/weblogic-kubernetes-operator:latest docker tag [Name Of Your Image For WebLogic Domain] phx.ocir.io/weblogicondocker/weblogic:12.2.1.3 docker tag traefik:1.4.5 phx.ocir.io/weblogicondocker/traefik:1.4.5   Generate an authentication token to log in to the phx.ocir.io OCIR Docker repository: Log in to your OCI dashboard. Click ‘User Settings’, then ‘Auth Tokens’ on the left-side menu. Save the generated password in a secured place. Log in to the OCIR Docker registry by entering this command: docker login phx.ocir.io When prompted for your username, enter your OCI tenancy name/oci username. For example: docker login phx.ocir.io Username: weblogicondocker/myusername           Password: Login Succeeded Create a Docker registry secret.  
The secret name must consist of lower case alphanumeric characters: kubectl create secret docker-registry <secret_name> --docker-server=<region>.ocir.io --docker-username=<oci_tenancyname>/<oci_username> --docker-password=<auth_token> --docker-email=example_email For example, for the PHX registry create docker secret ocisecret: kubectl create secret docker-registry ocisecret --docker-server=phx.ocir.io --docker-username=weblogicondocker/myusername --docker-password= _b5HiYcRzscbC48e1AZa --docker-email=myusername@oracle.com Push Docker images into OCIR:   docker push phx.ocir.io/weblogicondocker/weblogic-kubernetes-operator:latest docker push phx.ocir.io/weblogicondocker/weblogic:12.2.1.3 docker push phx.ocir.io/weblogicondocker/traefik:1.4.5   Log in to the OCI console and verify the image: Log in to the OCI console. Verify that you are using the correct region, for example, us-phoenix-1. Under Containers, select Registry. The image should be visible on the Registry page. Click on image name, select ‘Actions’ to make it ‘Public’ Modify the configuration YAML files to reflect the Docker image names in the OCIR Our final steps are to customize the parameters in the input files and generate deployment YAML files for the WebLogic cluster, WebLogic Operator, and to use the Traefik load balancer to reflect the image changes and local configuration. We will use the provided open source scripts:  create-weblogic-operator.sh and create-weblogic-domain.sh. Use Git to download the WebLogic Kubernetes Operator project: git clone https://github.com/oracle/weblogic-kubernetes-operator.git Modify the YAML inputs to reflect the image names: cd $SRC/weblogic-kubernetes-operator/kubernetes Change the ‘image’ field to the corresponding Docker repository image name in the OCIR: ./internal/create-weblogic-domain-job-template.yaml:   image: phx.ocir.io/weblogicondocker/weblogic:12.2.1.3 ./internal/weblogic-domain-traefik-template.yaml:      image: phx.ocir.io/weblogicondocker/traefik:1.4.5 ./internal/domain-custom-resource-template.yaml:       image: phx.ocir.io/weblogicondocker/weblogic:12.2.1.3 ./create-weblogic-operator-inputs.yaml:         weblogicOperatorImage: phx.ocir.io/weblogicondocker/weblogic-kubernetes-operator:latest Review and customize the other parameters in the create-weblogic-operator-inputs.yaml and create-weblogic-domain-inputs.yaml files. Check all the available options and descriptions in the installation instructions for the Operator and WebLogic Domain. Here is the list of customized values in the create-weblogic-operator-inputs.yaml file for this demo: targetNamespaces: domain1 weblogicOperatorImage: phx.ocir.io/weblogicondocker/weblogic-kubernetes-operator:latest externalRestOption: SELF_SIGNED_CERT externalSans: IP:129.146.109.106 Here is the list of customized values in the create-weblogic-domain-inputs.yaml file for this demo: domainUID: domain1 t3PublicAddress: 0.0.0.0 exposeAdminNodePort: true namespace: domain1 loadBalancer: TRAEFIK exposeAdminT3Channel: true weblogicDomainStoragePath: /scratch/external-domain-home/pv001 Note: Currently, we recommend that you use Traefik and the Apache HTTP Server load balancers for running WebLogic Server on the OCI Container Engine for Kubernetes. At this time we cannot certify the Voyager HAProxy Ingress Controller due to a lack of support in OKE. The WebLogic domain will use the persistent volume mapped to the path, specified by the parameter weblogicDomainStoragePath. 
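Before pointing the domain at that path, it can be worth confirming that the /scratch NFS export is actually mounted on the client nodes. This is a minimal check that reuses the demo node addresses and ssh key from the NFS setup above; adjust both for your own environment:
ssh -i ~/.ssh/id_rsa opc@129.146.22.123 "df -h /scratch"
ssh -i ~/.ssh/id_rsa opc@129.146.66.11 "df -h /scratch"
Each command should report the 10.0.11.3:/scratch filesystem; if it does not, re-check the /etc/fstab entries and remount before continuing.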
Let's create the persistent volume directory on the NFS server, Node1, using the command:
ssh -i ~/.ssh/id_rsa opc@129.146.109.106 "mkdir -m 777 -p /scratch/external-domain-home/pv001"
Our demo domain is configured to run in the namespace domain1. To create the namespace domain1, execute this command:
kubectl create namespace domain1
The username and password credentials for access to the Administration Server must be stored in a Kubernetes secret in the same namespace that the domain will run in. The script does not create the secret in order to avoid storing the credentials in a file. Oracle recommends that this command be executed in a secure shell and that the appropriate measures be taken to protect the security of the credentials. To create the secret, issue the following command:
kubectl -n NAMESPACE create secret generic SECRET_NAME --from-literal=username=ADMIN-USERNAME --from-literal=password=ADMIN-PASSWORD
For our demo values:
kubectl -n domain1 create secret generic domain1-weblogic-credentials --from-literal=username=weblogic --from-literal=password=welcome1
Finally, run the create script, pointing it at your inputs file and the output directory:
./create-weblogic-operator.sh -i create-weblogic-operator-inputs.yaml -o /path/to/weblogic-operator-output-directory
It will create and start all the related operator deployments. Check the operator pod status with kubectl (a sample command is sketched at the end of this walkthrough, just before the Summary). Then execute the corresponding create script for the WebLogic domain:
./create-weblogic-domain.sh -i create-weblogic-domain-inputs.yaml -o /path/to/weblogic-domain-output-directory
To check the status of the WebLogic cluster, run this command:
kubectl get pods -n domain1
Let's see how the load balancer works. For that, let's access the WebLogic Server Administration Console and deploy the testwebapp.war application. In the customized inputs for the WebLogic domain, we have specified to expose the AdminNodePort. To review the port number, list the services in the domain1 namespace (see the sketch at the end of this walkthrough). Let's use one of the node's external IP addresses to access the Administration Console. In our demo, it is http://129.146.109.106:30701/console. Log in to the WebLogic Server Administration Console using the credentials weblogic/welcome1. Click 'Deployments', then 'Lock & Edit', and upload the testwebapp.war application. Select cluster-1 as a target and click 'Finish', then 'Release Configuration'. Select the 'Control' tab and click 'Start serving all requests'. The status of the deployment should change to 'active'. Let's demonstrate load balancing HTTP requests using Traefik as an Ingress controller on the Kubernetes cluster. The Traefik load balancer NodePort appears in the same service listing; in this demo it is running on port 30305. Every time we access the testwebapp application link, http://129.146.22.123:30305/testwebapp/, the application displays information about the Managed Server that served the request. Another load of the same URL displays the information about Managed Server 1. Because the WebLogic cluster is exposed to the external world and accessible using the external IP addresses of the nodes, an authorized WebLogic user can use the T3 protocol to access all the available WebLogic resources by using WLST commands. With a firewall, you have to run T3 using tunneling with a proxy (use T3 over HTTP; turn on tunneling in the WebLogic Server instance and then use the "HTTP" protocol instead of "T3"). See this blog for more details. If you are outside of the corporate network, you can use T3 with no limitations.
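The original post displayed the output of the operator pod check and the NodePort lookups as console screenshots, which are not reproduced here. As a rough sketch, the equivalent kubectl commands look like the following; the weblogic-operator namespace is the operator default and is an assumption for this demo setup:
# Operator pod status
kubectl get pods -n weblogic-operator
# NodePorts for the Administration Server and the Traefik load balancer
kubectl get svc -n domain1
In this demo, the Administration Server console is exposed on NodePort 30701 and the Traefik web port on NodePort 30305, matching the URLs used above.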
Summary
In this blog, we demonstrated the steps required to set up a WebLogic cluster on the OCI Container Engine for Kubernetes, running on Oracle Cloud Infrastructure, and to load balance a web application deployed on that cluster. Running WebLogic Server on the OCI Container Engine for Kubernetes enables users to leverage WebLogic Server applications in a managed Kubernetes environment, to integrate WebLogic Server applications with other cloud applications, and to evolve their usage of WebLogic Server and expand their usage of Kubernetes. We are also publishing a series of blog entries that describe in detail how to run the operator, how to stand up one or more WebLogic domains in Kubernetes, how to scale a WebLogic cluster up or down manually or automatically using the WebLogic Diagnostic Framework (WLDF) or Prometheus, how the Operator manages load balancing for web applications deployed in WebLogic clusters, and how to provide integration for managing operator logs through Elasticsearch, Logstash, and Kibana.


Announcing General Availability version of the WebLogic Kubernetes Operator

We are very pleased to announce the release of our General Availability (GA) version of the WebLogic Kubernetes Operator.  The Operator, first released in February as a Technology Preview version, simplifies the creation and management of WebLogic Server 12.2.1.3 domains on Kubernetes.  The GA operator supports additional WebLogic features, and is certified and supported for use in development and production.  Certification includes support for the Operator and WebLogic Server configurations running on the Oracle Cloud Infrastructure (OCI), on Kubernetes clusters created using the Terraform Kubernetes Installer for OCI, and using the Oracle Cloud Infrastructure Registry (OCIR) for storing Operator and WebLogic Server domain images. For additional information about WebLogic on Kubernetes  certification and WebLogic Kubernetes Operator, see Support Doc ID 2349228.1, and reference the announcement blog, WebLogic on Kubernetes Certification. We have developed the Operator to integrate WebLogic Server and Kubernetes, allowing Kubernetes to serve as a container infrastructure hosting WebLogic Server instances. The WebLogic Kubernetes Operator extends Kubernetes to create, configure, and manage a WebLogic domain. Read our prior announcement blog, Announcing WebLogic Kubernetes Operator, and find the WebLogic Kubernetes Operator GitHub project at https://github.com/oracle/weblogic-kubernetes-operator.   Running WebLogic Server on Kubernetes enables users to leverage WebLogic Server applications in Kubernetes environments, to integrate WebLogic Server applications with other cloud applications, and to evolve their usage of WebLogic Server and expand their usage of Kubernetes. The WebLogic Kubernetes Operator allows users to: Simplify WebLogic management in Kubernetes Ensure Kubernetes resources are allocated for WebLogic domains Manage the overall environment, including load balancers, Ingress controllers, network fabric, and security, through Kubernetes APIs Simplify and automate patching and scaling operations Ensure that WebLogic best practices are followed Run WebLogic domains well and securely In this version of the WebLogic Kubernetes Operator and the WebLogic Server Kubernetes certification, we have added the following functionality and support: Support for Kubernetes versions 1.7.5, 1.8.0, 1.9.0, 1.10.0 In our Operator GitHub project, we provide instructions for how to build, test, and publish the Docker image for the Operator directly from Oracle Container Pipelines using the wercker.yml . Support for dynamic clusters, and auto-scaling of a WebLogic Server cluster with dynamic clusters. Please read the blog for details WebLogic Dynamic Cluster on Kubernetes. Support for the Apache HTTP Server and Voyager (HAProxy-backed) Ingress controller running within the Kubernetes cluster for load balancing HTTP requests across WebLogic Server Managed Servers running in clustered configurations. Integration with the Operator automates the configuration of these load balancers.  Find documentation for the Apache HTTP Server and Voyager Ingress Controller. Support for Persistent Volumes (PV) in NFS storage for multi-node environments. In our project, we provide a cheat sheet to configure the NFS volume on OCI, and some important notes about NFS volumes and the WebLogic Server domain in Kubernetes. The  Delete WebLogic domain resources script, which permanently removes the Kubernetes resources for a domain or domains, from a Kubernetes cluster. 
Please see “Removing a domain” in the README of the Operator project. Improved Prometheus support.   See Using Prometheus to Automatically Scale WebLogic Clusters on Kubernetes. Integration tests posted on our WebLogic Kubernetes Operator GitHub project. Our future plans include, certification of WebLogic Server on Kubernetes running on the OCI Container Engine for Kubernetes, providing an easy way to reprovision and redeploy existing  WebLogic Server domains in Kubernetes using the WebLogic Deploy Tooling, adding CI/CD of WebLogic deployments on Kubernetes with Oracle Container Pipelines, and new features and enhancements over time. Please stay tuned for more information. We hope this announcement is helpful to those of you seeking to deploy WebLogic Server on Kubernetes, and look forward to your feedback.    


WebLogic Dynamic Clusters on Kubernetes

Overview A WebLogic Server cluster consists of multiple Managed Server instances running simultaneously and working together to provide increased scalability and reliability.  WebLogic Server supports two types of clustering configurations, configured and dynamic clustering.  Configured clusters are created by manually configuring each individual Managed Server instance.  In dynamic clusters, the Managed Server configurations are generated from a single, shared template.  Using a template greatly simplifies the configuration of clustered Managed Servers and allows for dynamically assigning servers to Machine resources, thereby providing a greater utilization of resources with minimal configuration.  With dynamic clusters, when additional server capacity is needed, new server instances can be added to the cluster without having to manually configure them individually. Also, unlike configured clusters, scaling up of dynamic clusters is not restricted to the set of servers defined for a cluster, but can be increased based on runtime demands.   For more information on how to create, configure, and use dynamic clusters in WebLogic Server, see Dynamic Clusters.   Support for Dynamic Clusters by Oracle WebLogic  Kubernetes Operator Previously, the WebLogic Kubernetes Operator supported configured clusters only.  That is, the operator could only manage and scale Managed Servers defined for a configured cluster.  Now, this limitation has been removed. By supporting dynamic clusters, the operator can easily scale the number of Managed Server instances based on a server template instead of requiring that you first manually configure them.   Creating a Dynamic Cluster in a WebLogic Domain in Kubernetes   The WebLogic Server team has been actively working to integrate WebLogic Server in Kubernetes, WebLogic Server Certification on Kubernetes.  The Oracle WebLogic Kubernetes Operator provides a mechanism for creating and managing any number of WebLogic domains, automates domain startup, allows scaling of WebLogic clusters, manages load balancing for web applications deployed in WebLogic clusters, and provides integration with Elasticsearch, Logstash, and Kibana. The operator is currently available as an open source project at https://oracle.github.io/weblogic-kubernetes-operator.  To create a WebLogic domain, the recommended approach is to use the provided create-weblogic-domain.sh script, which automates the creation of a WebLogic domain within a Kubernetes cluster.  The create-weblogic-domain.sh script takes an input file, create-weblogic-domain-inputs.yaml, which specifies the configuration properties for the WebLogic domain. The following parameters of the input file are used when creating a dynamic cluster:   Parameter Definition Default clusterName The name of the WebLogic cluster instance to generate for the domain. cluster-1 clusterType The type of WebLogic cluster. Legal values are "CONFIGURED" or "DYNAMIC". CONFIGURED configuredManagedServerCount   The number of Managed Server instances to generate for the domain. 2 initialManagedServerReplicas The number of Managed Servers to start initially for the domain. 2 managedServerNameBase Base string used to generate Managed Server names.  Used as the server name prefix in a server template for dynamic clusters. 
managed-server

The following example configuration will create a dynamic cluster named ‘cluster-1’ with four defined Managed Servers (managed-server1 … managed-server4) in which the operator will initially start up two Managed Server instances, managed-server1 and managed-server2:

# Type of WebLogic Cluster
# Legal values are "CONFIGURED" or "DYNAMIC"
clusterType: DYNAMIC

# Cluster name
clusterName: cluster-1

# Number of Managed Servers to generate for the domain
configuredManagedServerCount: 4

# Number of Managed Servers to initially start for the domain
initialManagedServerReplicas: 2

# Base string used to generate Managed Server names
managedServerNameBase: managed-server

To create the WebLogic domain, you simply run the create-weblogic-domain.sh script, specifying your input file and an output directory for any generated configuration files:

#> create-weblogic-domain.sh -i create-weblogic-domain-inputs.yaml -o /path/to/weblogic-domain-output-directory

There are some limitations when creating WebLogic clusters using the create domain script: the script creates the specified number of Managed Server instances and places them all in one cluster, and the script always creates exactly one cluster. Alternatively, you can create a WebLogic domain manually as outlined in Manually Creating a WebLogic Domain.

How WebLogic Kubernetes Operator Manages a Dynamic Cluster

A Kubernetes Operator is “an application-specific controller that extends the Kubernetes API to create, configure, and manage instances of complex applications.” For more information on operators, see Introducing Operators: Putting Operational Knowledge into Software. The Oracle WebLogic Kubernetes Operator extends Kubernetes to create, configure, and manage any number of WebLogic domains running in a Kubernetes environment. It provides a mechanism to create domains, automate domain startup, and allow scaling of both configured and dynamic WebLogic clusters. For more details about the WebLogic Kubernetes Operator, see the blog Announcing the Oracle WebLogic Kubernetes Operator.

Because the WebLogic Kubernetes Operator manages the life cycle of Managed Servers in a Kubernetes cluster, it provides the ability to start up and scale (up or down) WebLogic dynamic clusters. The operator manages the startup of a WebLogic domain based on the settings defined in a Custom Resource Domain (CRD). The number of WLS pods/Managed Server instances running in a Kubernetes cluster, for a dynamic cluster, is represented by the ‘replicas’ attribute value of the ClusterStartup entry in the following domain custom resource YAML file:

clusterStartup:
  - desiredState: "RUNNING"
    clusterName: "cluster-1"
    replicas: 2
    env:
    - name: JAVA_OPTIONS
      value: "-Dweblogic.StdoutDebugEnabled=false"
    - name: USER_MEM_ARGS
      value: "-Xms64m -Xmx256m"

For the above example entry, during WebLogic domain startup, the operator would start two pod/Managed Server instances for the dynamic cluster ‘cluster-1’. Details of a domain custom resource YAML file can be found in Starting a WebLogic Domain.

Scaling of WebLogic Dynamic Clusters on Kubernetes

There are several ways to initiate scaling through the operator, including:
On-demand, updating the Custom Resource Domain specification directly (using kubectl).
Calling the operator's REST scale API, for example, from curl.
Using a WLDF policy rule and script action to call the operator's REST scale API.
Using a Prometheus alert action to call the Operator's REST scale API.
On-Demand, Updating the Custom Resource Domain Directly

Scaling a dynamic cluster can be achieved by editing the Custom Resource Domain directly, using the ‘kubectl edit’ command and modifying the ‘replicas’ attribute value:

#> kubectl edit domain domain1 -n [namespace]

This command will open an editor that allows you to edit the defined Custom Resource Domain specification. Once the change is committed, the operator is notified and will immediately attempt to scale the corresponding dynamic cluster by reconciling the number of running pods/Managed Server instances with the ‘replicas’ value specification.

Calling the Operator's REST Scale API

Alternatively, the WebLogic Kubernetes Operator exposes a REST endpoint, with the following URL format, that allows an authorized actor to request scaling of a WebLogic cluster:

http(s)://${OPERATOR_ENDPOINT}/operator/<version>/domains/<domainUID>/clusters/<clusterName>/scale

<version> denotes the version of the REST resource. <domainUID> is the unique ID that will be used to identify this particular domain. This ID must be unique across all domains in a Kubernetes cluster. <clusterName> is the name of the WebLogic cluster instance to be scaled.

For example:

http(s)://${OPERATOR_ENDPOINT}/operator/v1/domains/domain1/clusters/cluster-1/scale

The /scale REST endpoint accepts an HTTP POST request, and the request body supports the JSON "application/json" media type. The request body is a simple name-value item named managedServerCount:

{
  "managedServerCount": 3
}

The managedServerCount value designates the number of WebLogic Server instances to scale to.

Note: An example use of the REST API, using the curl command, can be found in scalingAction.sh.

Using a WLDF Policy Rule and Script Action to Call the Operator's REST Scale API

A WebLogic Server dynamic cluster can be automatically scaled by increasing (or decreasing) the number of pods based on resource metrics provided by the WebLogic Diagnostic Framework (WLDF). WLDF is a suite of services and APIs that collect and surface metrics that provide visibility into server and application performance. WLDF provides a Policies and Actions component to support the automatic scaling of dynamic clusters. There are two types of scaling supported by WLDF:

Calendar-based scaling — Scaling operations on a dynamic cluster that are executed on a particular date and time.
Policy-based scaling — Scaling operations on a dynamic cluster that are executed in response to changes in demand.

In this blog, we will focus on policy-based scaling, which lets you write policy expressions for automatically executing configured actions when the policy expression rule is satisfied. These policies monitor one or more types of WebLogic Server metrics, such as memory, idle threads, and CPU load. When the configured threshold in a policy is met, the policy is triggered, and the corresponding scaling action is executed.
Example Policy Expression Rule   The following is an example policy expression rule that was used in Automatic Scaling of WebLogic Clusters on Kubernetes:   wls:ClusterGenericMetricRule("cluster-1","com.bea:Type=WebAppComponentRuntime, ApplicationRuntime=OpenSessionApp,*","OpenSessionsCurrentCount","&gt;=",0.01,5,"1 seconds","10 seconds"   This ‘ClusterGenericMetricRule’ smart rule is used to observe trends in JMX metrics that are published through the Server Runtime MBean Server and can be read as:   For the cluster, ‘cluster-1’, WLDF will monitor the OpenSessionsCurrentCount attribute of the WebAppComponentRuntime MBean for the OpenSessionApp application.  If the OpenSessionsCurrentCount is greater than or equal to 0.01 for 5% of the servers in the cluster, then the policy will be evaluated as true. Metrics will be collected at a sampling rate of 1 second and the sample data will be averaged out over the specified 10 second period of time of the retention window.   You can use any of the following tools to configure policies for diagnostic system modules:   WebLogic Server Administration Console WLST REST JMX application   Below is an example configuration of a policy, named ‘myScaleUpPolicy’, shown as it would appear in the WebLogic Server Administration Console:       Example Action   An action is an operation that is executed when a policy expression rule evaluates to true. WLDF supports the following types of diagnostic actions: Java Management Extensions (JMX) Java Message Service (JMS) Simple Network Management Protocol (SNMP) Simple Mail Transfer Protocol (SMTP) Diagnostic image capture Elasticity framework REST WebLogic logging system Script   The WebLogic Server team has an example shell script, scalingAction.sh, for use as a Script Action, which illustrates how to issue a request to the operator’s REST endpoint.  Below is an example screen shot of the Script Action configuration page from the WebLogic Server Administration Console:       Important notes about the configuration properties for the Script Action:   Working Directory and Path to Script configuration entries specify the volume mount path (/shared) to access the WebLogic domain home. The scalingAction.sh script requires access to the SSL certificate of the operator’s endpoint and this is provided through the environment variable ‘INTERNAL_OPERATOR_CERT’.  The operator’s SSL certificate can be found in the ‘internalOperatorCert’ entry of the operator’s ConfigMap weblogic-operator-cm: For example:   #> kubectl describe configmap weblogic-operator-cm -n weblogic-operator   Name:         weblogic-operator-cm Namespace:    weblogic-operator Labels:       weblogic.operatorName=weblogic-operator Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","data":{"externalOperatorCert":"","internalOperatorCert":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t...   Data ==== internalOperatorCert: ---- LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR3akNDQXFxZ0F3SUJBZ0lFRzhYT1N6QU...   
The scalingAction.sh script accepts a number of customizable parameters: •       action - scaleUp or scaleDown (Required) •       domain_uid - WebLogic domain unique identifier (Required) •       cluster_name - WebLogic cluster name (Required) •       kubernetes_master - Kubernetes master URL, default=https://kubernetes •       access_token - Service Account Bearer token for authentication and authorization for access to REST Resources •       wls_domain_namespace - Kubernetes namespace in which the WebLogic domain is defined, default=default •       operator_service_name - WebLogic Operator Service name of the REST endpoint, default=internal-weblogic-operator-service •       operator_service_account - Kubernetes Service Account name for the WebLogic Operator, default=weblogic-operator •       operator_namespace – Namespace in which the WebLogic Operator is deployed, default=weblogic-operator •       scaling_size – Incremental number of WebLogic Server instances by which to scale up or down, default=1   For more information about WLDF and diagnostic policies and actions, see Configuring Policies and Actions in Configuring and Using the Diagnostics Framework for Oracle WebLogic Server. Note: A more detailed description of automatic scaling using WLDF can be found in WebLogic on Kubernetes, Try It! and Automatic Scaling of WebLogic Clusters on Kubernetes.   There are a few key differences between the automatic scaling of WebLogic clusters described in this blog and my previous blog, Automatic Scaling of WebLogic Clusters on Kubernetes:   In the previous blog, as in the earlier release, only scaling of configured clusters was supported. In this blog: To scale the dynamic cluster, we use the WebLogic Kubernetes Operator instead of using a Webhook. To scale the dynamic cluster, we use a Script Action, instead of a REST action. To scale pods, scaling actions invoke requests to the operator’s REST endpoint, instead of the Kubernetes API server. Using a Prometheus Alert Action to Call the Operator's REST Scale API   In addition to using the WebLogic Diagnostic Framework, for automatic scaling of a dynamic cluster, you can use a third party monitoring application like Prometheus.  Please read the following blog for details about Using Prometheus to Automatically Scale WebLogic Clusters on Kubernetes.   What Does the Operator Do in Response to a REST Scaling Request?   When the WebLogic Kubernetes Operator receives a scaling request through its scale REST endpoint, it performs the following actions: Performs an authentication and authorization check to verify that the specified user is allowed to perform the specified operation on the specified resource. Validates that the specified domain, identified by the domainUID, exists. The domainUID is the unique ID that will be used to identify this particular domain. This ID must be unique across all domains in a Kubernetes cluster. Validates that the WebLogic cluster, identified by the clusterName, exists. The clusterName is the name of the WebLogic cluster instance to be scaled. Verifies that the scaling request’s ‘managedServerCount’ value does not exceed the configured maximum cluster size for the specified WebLogic cluster.  For dynamic clusters, ‘MaxDynamicClusterSize’ is a WebLogic attribute that specifies the maximum number of running Managed Server instances allowed for scale up operations.  See Configuring Dynamic Clusters for more information on attributes used to configure dynamic clusters. 
Initiates scaling by setting the ‘Replicas’ property within the corresponding domain custom resource, which can be done in either:   A clusterStartup entry, if defined for the specified WebLogic cluster. For example:   Spec:   …   Cluster Startup:     Cluster Name:   cluster-1     Desired State:  RUNNING     Env:       Name:     JAVA_OPTIONS       Value:    -Dweblogic.StdoutDebugEnabled=false       Name:     USER_MEM_ARGS       Value:    -Xms64m -Xmx256m     Replicas:   2    …   At the domain level, if a clusterStartup entry is not defined for the specified WebLogic cluster and the startupControl property is set to AUTO For example:     Spec:     Domain Name:  base_domain     Domain UID:   domain1     Export T 3 Channels:     Image:              store/oracle/weblogic:12.2.1.3     Image Pull Policy:  IfNotPresent     Replicas:           2     Server Startup:       Desired State:  RUNNING       Env:         Name:         JAVA_OPTIONS         Value:        -Dweblogic.StdoutDebugEnabled=false         Name:         USER_MEM_ARGS         Value:        -Xms64m -Xmx256m       Server Name:    admin-server     Startup Control:  AUTO   Note: You can view the full WebLogic Kubernetes domain resource with the following command: #> kubectl describe domain <domain resource name> In response to a change to the ‘Replicas’ property in the Custom Resource Domain, the operator will increase or decrease the number of pods (Managed Servers) to match the desired replica count. Wrap Up The WebLogic Server team has developed an Oracle WebLogic Kubernetes Operator, based on the Kubernetes Operator pattern, for integrating WebLogic Server in a Kubernetes environment.  The operator is used to manage the life cycle of a WebLogic domain and, more specifically, to scale a dynamic cluster.  Scaling a WebLogic dynamic cluster can be done, either on-demand or automatically, using either the WebLogic Diagnostic Framework or third party monitoring applications, such as Prometheus.  In summary, the advantages of using WebLogic dynamic clusters over configured clusters in a Kubernetes cluster are:   Managed Server configuration is based on a single server template. When additional server capacity is needed, new server instances can be added to the cluster without having to manually configure them individually. Unlike configured clusters, scaling up of dynamic clusters is not restricted to the set of servers defined in the cluster but can be increased based on runtime demands. I hope you’ll take the time to download and take the Oracle WebLogic Kubernetes Operator for a spin and experiment with the automatic scaling feature for dynamic clusters. Stay tuned for more blogs on future features that are being added to enhance the Oracle WebLogic Kubernetes Operator.


Using Prometheus to Automatically Scale WebLogic Clusters on Kubernetes

Elasticity (scaling up or scaling down) of a WebLogic Server cluster provides the benefits of being able to manage resources based on demand and enhances the reliability of customer applications while managing resource costs. There are different ways to trigger the automatic scaling of WebLogic Server clusters in a Kubernetes environment. The architecture of the WebLogic Server Elasticity component, as well as a detailed explanation of how to scale up a WebLogic cluster using a WebLogic Diagnostic Framework (WLDF) policy, can be found in the Automatic Scaling of WebLogic Clusters on Kubernetes blog. In this blog, we demonstrate another way to automatically scale a WebLogic cluster on Kubernetes, by using Prometheus. Since Prometheus has access to all available WebLogic metrics data, users have the flexibility to use any of those metrics to specify the rules for scaling. Based on collected metrics data and configured alert rule conditions, Prometheus's Alert Manager will send an alert to trigger the desired scaling action and change the number of running Managed Servers in the WebLogic Server cluster. We use the WebLogic Monitoring Exporter to scrape runtime metrics for specific WebLogic Server instances and feed them to Prometheus. We also implement a custom notification integration using the webhook receiver, a user-defined REST service that is triggered when a scaling alert event occurs. After the alert rule matches the specified conditions, the Prometheus Alert Manager sends an HTTP request to the URL specified as a webhook to request the scaling action. For more information about the webhook used in the sample demo, see adnanh/webhook/. In this blog, you will learn how to configure Prometheus, the Prometheus Alert Manager, and a webhook to perform automatic scaling of WebLogic Server instances running in Kubernetes clusters. This picture shows all the components running in the pods in the Kubernetes environment: The WebLogic domain, running in a Kubernetes cluster, consists of: An Administration Server (AS) instance, running in a Docker container, in its own pod (POD 1). A WebLogic Server cluster, composed of a set of Managed Server instances, in which each instance is running in a Docker container in its own pod (POD 2 to POD 5). The WebLogic Monitoring Exporter web application, deployed on the WebLogic Server cluster. Additional components, each running in a Docker container in its own pod, are: Prometheus, the Prometheus Alert Manager, the WebLogic Kubernetes Operator, and the webhook server.
Installation and Deployment of the Components in the Kubernetes Cluster
Follow the installation instructions to create the WebLogic Kubernetes Operator and domain deployments. In this blog, we will be using the following parameters to create the WebLogic Kubernetes Operator and WebLogic domain:
1. Deploy the WebLogic Kubernetes Operator (create-weblogic-operator.sh). In create-operator-inputs.yaml:
serviceAccount: weblogic-operator
targetNamespaces: domain1
namespace: weblogic-operator
weblogicOperatorImage: container-registry.oracle.com/middleware/weblogic-kubernetes-operator:latest
weblogicOperatorImagePullPolicy: IfNotPresent
externalRestOption: SELF_SIGNED_CERT
externalRestHttpsPort: 31001
externalSans: DNS:slc13kef
externalOperatorCert:
externalOperatorKey:
remoteDebugNodePortEnabled: false
internalDebugHttpPort: 30999
externalDebugHttpPort: 30999
javaLoggingLevel: INFO
2. Create and start a domain (create-domain-job.sh). In create-domain-job-inputs.yaml:
domainUid: domain1
managedServerCount: 4
managedServerStartCount: 2
namespace: weblogic-domain
adminPort: 7001
adminServerName: adminserver
startupControl: AUTO
managedServerNameBase: managed-server
managedServerPort: 8001
weblogicDomainStorageType: HOST_PATH
weblogicDomainStoragePath: /scratch/external-domain-home/pv001
weblogicDomainStorageReclaimPolicy: Retain
weblogicDomainStorageSize: 10Gi
productionModeEnabled: true
weblogicCredentialsSecretName: domain1-weblogic-credentials
exposeAdminT3Channel: true
adminNodePort: 30701
exposeAdminNodePort: true
loadBalancer: TRAEFIK
loadBalancerWebPort: 30305
loadBalancerDashboardPort: 30315
3. Run this command to identify the admin NodePort to access the console:
kubectl -n weblogic-domain describe service domain1-adminserver
weblogic-domain is the namespace where the WebLogic domain pod is deployed. Previous blog entries have described how to start and run WebLogic Server instances in Kubernetes with the WebLogic Monitoring Exporter deployed on Managed Servers running in the cluster. Access the WebLogic Server Administration Console at this URL, http://[hostname]:30701/console, using the WebLogic credentials, "weblogic/welcome1". In our example, we set up an alert rule based on the number of open sessions produced by this web application, "testwebapp.war". Deploy the testwebapp.war application and the WebLogic Monitoring Exporter "wls-exporter.war" to DockerCluster. Review the DockerCluster NodePort for external access:
kubectl -n weblogic-domain describe service domain1-dockercluster-traefik
To make sure that the WebLogic Monitoring Exporter is deployed and running, access the application with a URL like the following: http://[hostname]:30305/wls-exporter/metrics
You will be prompted for the WebLogic user credentials that are required to access the metrics data, weblogic/welcome1. The metrics page will show the metrics configured for the WebLogic Monitoring Exporter. Make sure that the alert rule you want to set up in the Prometheus Alert Manager matches the metrics configured for the WebLogic Monitoring Exporter. Here is an example of the alert rule we used:
sum(webapp_config_open_sessions_current_count{webapp="testwebapp"}) > 15
The metric used, 'webapp_config_open_sessions_current_count', should be listed on the metrics web page.
Setting Up the Webhook for Alert Manager
We used this webhook application in our example. To build the Docker image, create this directory structure, with three subdirectories: apps, scripts, and webhooks.
1. Copy the webhook application executable file to the 'apps' directory and copy the scalingAction.sh script to the 'scripts' directory. Create a scaleUpAction.sh file in the 'scripts' directory and edit it with the code listed below:
#!/bin/bash
echo scale up action >> scaleup.log
MASTER=https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT
echo Kubernetes master is $MASTER
source /var/scripts/scalingAction.sh --action=scaleUp --domain_uid=domain1 --cluster_name=DockerCluster --kubernetes_master=$MASTER --wls_domain_namespace=domain1
2. Create a Dockerfile for the webhook, Dockerfile.webhook, as suggested:
FROM store/oracle/serverjre:8
COPY apps/webhook /bin/webhook
COPY webhooks/hooks.json /etc/webhook/
COPY scripts/scaleUpAction.sh /var/scripts/
COPY scripts/scalingAction.sh /var/scripts/
CMD ["-verbose", "-hooks=/etc/webhook/hooks.json", "-hotreload"]
ENTRYPOINT ["/bin/webhook"]
3. Create the hooks.json file in the webhooks directory, for example:
[
  {
    "id": "scaleup",
    "execute-command": "/var/scripts/scaleUpAction.sh",
    "command-working-directory": "/var/scripts",
    "response-message": "scale-up call ok\n"
  }
]
4. Build the 'webhook' Docker image:
docker rmi webhook:latest
docker build -t webhook:latest -f Dockerfile.webhook .
Deploying Prometheus, Alert Manager, and Webhook
We will run the Prometheus, Alert Manager, and webhook pods under the namespace 'monitoring'. Execute the following command to create the 'monitoring' namespace:
kubectl create namespace monitoring
To deploy a Prometheus instance in Kubernetes, create a Prometheus configuration file, prometheus-kubernetes.yml. A sample file is provided here. The example Prometheus configuration file specifies:
- weblogic/welcome1 as the user credentials
- Five seconds as the interval between updates of WebLogic Server metrics
- 32000 as the external port to access the Prometheus dashboard
- The scaling rule:
ALERT scaleup
  IF sum(webapp_config_open_sessions_current_count{webapp="testwebapp"}) > 15
  ANNOTATIONS {
    summary = "Scale up when current sessions is greater than 15",
    description = "Firing when total sessions active greater than 15"
  }
- The Alert Manager configured to listen on port 9093
As required, you can change these values to reflect your specific environment and configuration. You can also change the alert rule by constructing Prometheus-defined queries matching your elasticity needs. To generate alerts, we need to deploy the Prometheus Alert Manager as a separate pod, running in a Docker container. In our provided sample Prometheus Alert Manager configuration file, we use the webhook: Update the 'INTERNAL_OPERATOR_CERT' property in the webhook-deployment.yaml file with the value of the 'internalOperatorCert' property from the generated weblogic-operator.yaml file, used for the WebLogic Kubernetes Operator deployment, for example:   Start the webhook, Prometheus, and the Alert Manager to monitor the Managed Server instances:
kubectl apply -f alertmanager-deployment.yaml
kubectl apply -f prometheus-deployment.yaml
kubectl apply -f webhook-deployment.yaml
Verify that all the pods are started: Check that Prometheus is monitoring all Managed Server instances by browsing to http://[hostname]:32000. Examine the 'Insert metric at cursor' pull-down menu. It should list the metric names based on the current configuration of the WebLogic Monitoring Exporter web application.   You can check the Prometheus alert settings by accessing this URL, http://[hostname]:32000/alerts: It should show the configured rule, listed in the prometheus-deployment.yaml configuration file.
Auto Scaling of WebLogic Clusters in K8s
In this demo, we configured the WebLogic Server cluster to start two Managed Server instances, with a total configured number of Managed Servers equal to four. You can modify the values of these parameters, configuredManagedServerCount and initialManagedServerReplicas, in the create-domain-job-inputs.yaml file, to reflect your desired number of Managed Servers running in the cluster and the maximum limit of allowed replicas. Per our sample file configuration, initially we have only two Managed Server pods started. Let's check all the running pods now: Per our configuration in the alert rule, the scale-up will happen when the number of open sessions for the application 'testwebapp' on the cluster is more than 15.
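Note: The rule above uses the Prometheus 1.x alerting syntax that the sample was written for. If you run a Prometheus 2.x server instead, the same rule would be expressed in a rules file in roughly the following YAML form; this is an illustrative translation, not a file provided with the sample:
groups:
- name: wls-scaling
  rules:
  - alert: scaleup
    expr: sum(webapp_config_open_sessions_current_count{webapp="testwebapp"}) > 15
    annotations:
      summary: Scale up when current sessions is greater than 15
      description: Firing when total sessions active greater than 15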
Let's invoke the application URL 17 times using curl.sh:
#!/bin/bash
COUNTER=0
MAXCURL=17
while [ $COUNTER -lt $MAXCURL ]; do
  OUTPUT="$(curl http://$1:30305/testwebapp/)"
  if [ "$OUTPUT" != "404 page not found" ]; then
    echo $OUTPUT
    let COUNTER=COUNTER+1
    sleep 1
  fi
done
Issue the command:
. ./curl.sh [hostname]
When the sum of open sessions for the "testwebapp" application becomes more than 15, Prometheus will fire an alert via the Alert Manager. We can check the current alert status by accessing this URL, http://[hostname]:32000/alerts. To verify that the Alert Manager sent the HTTP POST to the webhook, check the webhook pod log: When the hook endpoint is invoked, the command specified by the "execute-command" property is executed, which in this case is the shell script, /var/scripts/scaleUpAction.sh. The scaleUpAction.sh script passes the parameters to the scalingAction.sh script, provided by the WebLogic Kubernetes Operator. The scalingAction.sh script issues a request to the Operator Service REST URL for scaling. To verify the scale-up operation, let's check the number of running Managed Server pods. It should be increased to a total of three running pods:
Summary
In this blog, we demonstrated how to use the Prometheus integration with WebLogic Server to trigger the automatic scaling of WebLogic Server clusters in a Kubernetes environment. The WebLogic Server cluster can be automatically scaled by increasing (or decreasing) the number of pods based on a very comprehensive set of WebLogic domain-specific (custom) metrics monitored and analyzed by Prometheus. Our sample demonstrates that in addition to being a great monitoring tool, Prometheus can easily be configured for WebLogic Server cluster scaling decisions.


Ensuring high level of performance with WebLogic JDBC

Written by Joseph Weinstein.   In this post, you will find some common best practices aimed at ensuring high levels of performance with WebLogic JDBC.   Use WebLogic DataSources for connection pooling of JDBC connections Making a real DBMS connection is expensive and slow, so you should use our datasources to retain and re-use connections. The ideal mode for using pooled connections is to use them as quickly and briefly as possible, getting them just when needed, and closing them (returning them to the pool) as soon as possible. This maximizes concurrency.  (It is crucial that the connection is a method-level object, not shared between application threads, and that it *is* closed, no matter what exit path the application code takes, else the pool could be leaked dry, all connections taken out and abandoned. See a best-practices code example farther down in this article. The long-term hardening and integration of WebLogic Datasources with applications and other WebLogic APIs make them much the preferred choice over UCP or third-party options.)   Use the Oracle JDBC thin driver (Type 4) rather than the OCI driver (Type 2) The Oracle JDBC thin driver is lightweight (easy to install and administer), platform-independent (entirely written in Java), and provides slightly higher performance than the JDBC OCI (Oracle Call Interface) driver.  The thin driver does not require any additional software on the client side.  The Oracle JDBC FAQ stipulates that the performance benefit with the thin driver is not consistent and that the OCI driver can even deliver better performance in some scenarios. Using OCI in WebLogic carries the danger that any bug in the native library can take down an entire WebLogic Server instance. WebLogic officially no longer supports using the driver in OCI mode.   Use PreparedStatement objects rather than plain Statements  With PreparedStatements, the compiled SQL query plans will be kept in the DBMS cache, parsed only once and re-used thereafter.   Use/configure the WebLogic Datasource's statement cache size wisely. The datasource can actually cache and allow you to transparently re-use a Prepared/CallableStatement made from a given pooled connection. The pool's statement cache size (default=10) determines how many. This may take some memory but is usually worth the performance gain. Note well though that the cache is purged using a least-recently-used policy, so if your app(s) that use a datasource typically make 30 distinct prepared statements, each new request would put a new one in the cache and kick out one used 10 statements ago, and this would thrash the cache, with no statement ever surviving long enough to be re-used. The console makes several statement cache statistics available to allow you to size the cache to service all your statements, but if memory becomes a huge issue, it may be better to set the cache size to zero. When using the Oracle JDBC driver, also consider using its statement caching as a lower-level alternative to WebLogic caching. There have been times when the driver uses significant memory per open statement, such as if cached by WebLogic, but if cached at the driver level instead, the driver knows it can share and minimize this memory. To use driver-level statement caching instead, make sure the WebLogic statement cache size is zero, and add these properties to the list of driver properties for the datasource: implicitCachingEnabled=true and maxStatements=XXX, where XXX is ideally a number of statements enough to cover all your common calls.
As with the WebLogic cache size, a too-small number might be useless or worse. Observe your memory usage after the server has run under full load for a while.   Close all JDBC resources ASAP, inline, and for safety, verify this in a finally block This includes Lob, ResultSet, Statement, and Connection objects, to conserve memory and avoid certain DBMS-side resource issues.  By spec, Connection.close() should close all sub-objects obtained from it, and the WebLogic version of close() intends to do that while putting the actual connection back into the pool, but some objects may have different implementations in different drivers that won't allow WebLogic to release everything. JDBC objects like Lobs that are not properly closed can lead to this error: java.sql.SQLException: ORA-01000: maximum open cursors exceeded.   If you don't explicitly close Statements and ResultSets right away, cursors may accumulate and exceed the maximum number allowed in your DB before the Connection is closed.    Here is a code example for WebLogic JDBC best practices:
public void myTopLevelJDBCMethod() {
    Connection c = null; // defined as a method-level object, not accessible or kept where other threads can use it
    // ... do all pre-JDBC work ...
    // The try block, in which all JDBC for this method (and sub-methods) will be done
    try {
        // Get the connection directly, fresh from a WLS datasource
        c = myDatasource.getConnection();
        // ... do all your JDBC. You can pass the connection to sub-methods, but they should not keep it,
        // or expect it or any of the objects obtained from it to be open/viable after the end of the method ...
        doMyJDBCSubTaskWith(c);
        c.close(); // close the connection as soon as all JDBC is done
        c = null;  // so the finally block knows it has been closed, if it was ever obtained
        // ... do whatever else remains that doesn't need JDBC. I have seen *huge* concurrency improvements by
        // closing the connection ASAP before doing any non-JDBC post-processing of the data, etc.
    } catch (Exception e) {
        // ... do what you want/need if you need a catch block, but *always* have the finally block:
    } finally {
        // If we got here somehow without closing c, do it now, without fail,
        // as the first thing in the finally block so it always happens
        if (c != null) try { c.close(); } catch (Exception ignore) {}
        // ... do whatever else you want in the finally block ...
    }
}
Set the Datasource Shrink Frequency to 0 for fastest connection availability A datasource can be configured to vary its count of real connections, closing an unneeded portion (above the minimum capacity) when there is insufficient load currently, and it will repopulate itself as/when needed. This will impose slowness on apps during the uptick in load, while new replacement connections are made. By setting the shrink frequency to zero, the datasource will keep all working connections indefinitely, ready. This is sometimes a tradeoff in the DBMS, if there are too many idle sessions...   Set the datasource test frequency to something infrequent or zero The datasource can be configured to periodically test any connections that are currently unused, idle in the pool, replacing bad ones, independently of any application load. This has some benefits, such as keeping the connections looking busy enough for firewalls and DBMSes that might otherwise silently kill them for inactivity.
However, it is overhead in WLS, and is mostly superfluous if you have test-connections-on-reserve enabled, as you should.   Consider skipping the SQL-query connection test on reserve sometimes You should always explicitly enable 'test connections on reserve' because even with Active GridLink information about DBMS health, individual connections may go bad, unnoticed. The only way to ensure a connection you're getting is good is to have the datasource test it just before you get it. However, there may be cases where this connection test every time is too expensive, either because it adds too much time to the short user use-case, or it burdens the DBMS too much. In these cases, if it is somewhat tolerable that an application occasionally gets a bad connection, there is a datasource option, 'seconds to trust an idle connection' (default 10 seconds), which means that if a connection in the pool has been tested successfully, or previously used by an application successfully, within that number of seconds, we will trust the connection and give it to the requester without testing it. In a heavy-load, quick-turnover environment this can safely and completely avoid the explicit overhead of testing. For maximal safety however, set 'seconds to trust an idle connection' explicitly to zero.   Consider making the test as lightweight as possible If the datasource's 'Test Table' parameter is set, the pool will test a connection by doing a 'select count(*)' from that table. DUAL is the traditional choice for Oracle. There are options to use the JDBC isValid() call instead, which for *some* drivers is faster. When using the Oracle driver you can set the 'test table' to SQL ISVALID. The Oracle dbping() is an option, enabled by setting the 'test table' to SQL PINGDATABASE, which checks network connectivity to the DBMS without actually invoking any user-level DBMS functionality. These are faster, but there are rare cases where the user session functionality is broken, even if the network connectivity is still good. For XA connections, there is a heavier tradeoff. A test table query will be done in its own XA transaction, which is more overhead, but this is useful sometimes because it catches and works around some session state problems that would otherwise cause the next user XA transaction to fail. For maximal safety, do a quick real query, such as by setting the test table to SQL SELECT 1 FROM DUAL.   Pinned-to-Thread not recommended Disabled by default, this option can improve performance by transparently assigning pool connections to specific WLS threads. This eliminates contention between threads while accessing a datasource.  However, this parameter should be used with great care because the connection pool maximum capacity is ignored when pinned-to-thread is enabled. Each thread (possibly numbering in the several hundreds) will need/get its own connection, and no shrinking can apply to that pool. That being said, pinned-to-thread is not recommended, for historical/trust reasons. It has not gotten the historical usage, testing, and hardening that the rest of WebLogic pooling has gotten.
If your applications can run concurrently, unbounded in number except for this WebLogic limit, the maximum capacity of the datasource should match this thread count so that none of your application threads have to wait at an empty pool until some other thread returns a connection.   Visit Tuning Data Source Connection Pools and Tuning Data Sources for additional parameter tuning of JDBC data sources and connection pools to improve system performance with WebLogic Server, and Performance Tuning Your JDBC Application for application-specific design and configuration.
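As a compact recap of the connection-handling guidance above, here is a minimal sketch that looks up a datasource through JNDI and uses try-with-resources so that the statement, result set, and connection are always closed; the JNDI name jdbc/myDS and the query are assumptions for illustration, not values from this article:
import javax.naming.InitialContext;
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class QuickLookupExample {
    public int countRows() throws Exception {
        // Look up the WebLogic datasource by its JNDI name (jdbc/myDS is an assumed name)
        DataSource ds = (DataSource) new InitialContext().lookup("jdbc/myDS");
        // try-with-resources returns the connection to the pool and closes the statement and result set
        try (Connection c = ds.getConnection();
             PreparedStatement ps = c.prepareStatement("SELECT COUNT(*) FROM DUAL");
             ResultSet rs = ps.executeQuery()) {
            return rs.next() ? rs.getInt(1) : 0;
        }
    }
}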


Processing the Oracle WebLogic Server Kubernetes Operator Logs using Elastic Stack

  Oracle has been working with the WebLogic community to find ways to make it as easy as possible for organizations using WebLogic Server to run important workloads and to move those workloads into the cloud. One aspect of that effort is the delivery of the Oracle WebLogic Server Kubernetes Operator. In this article we will demonstrate a key feature that assists with the management of WebLogic domains in a Kubernetes environment: the ability to publish and analyze logs from the operator using products from the Elastic Stack.  What Is the Elastic Stack? The Elastic Stack (ELK) consists of several open source products, including Elasticsearch, Logstash, and Kibana. Using the Elastic Stack with your log data, you can gain insight about your application's performance in near real time. Elasticsearch is a scalable, distributed and RESTful search and analytics engine based on Lucene. It provides a flexible way to control indexing and fast search over various sets of data. Logstash is a server-side data processing pipeline that can consume data from several sources simultaneously, transform it, and route it to a destination of your choice. Kibana is a browser-based plug-in for Elasticsearch that you use to visualize and explore data that has been collected. It includes numerous capabilities for navigating, selecting, and arranging data in dashboards. A customer who uses the operator to run a WebLogic Server cluster in a Kubernetes environment will need to monitor the operator and servers. Elasticsearch and Kibana provide a great way to do it. The following steps explain how to set this up. Processing Logs Using ELK In this example, the operator and the Logstash agent are deployed in one pod, and Elasticsearch and Kibana are deployed as two independent pods in the default namespace. We will use a memory-backed volume that is shared between the operator and Logstash containers and that is used to store the logs. The operator instance places the logs into the shared volume, /logs. Logstash collects the logs from the volume and transfers the filtered logs to Elasticsearch. Finally, we will use Kibana and its browser-based UI to analyze and visualize the logs. Operator and ELK integration To enable ELK integration with the operator, first we need to set the elkIntegrationEnabled parameter in the create-operator-inputs.yaml file to true. This causes Elasticsearch, Logstash and Kibana to be installed, and Logstash to be configured to export the operator's logs to Elasticsearch. Then simply follow the installation instructions to install and start the operator. To verify that ELK integration is activated, check the output produced by the following command: $ . ./create-weblogic-operator.sh -i create-operator-inputs.yaml This command should print the following information for ELK: Deploy ELK... deployment "elasticsearch" configured service "elasticsearch" configured deployment "kibana" configured service "kibana" configured To ensure that all three deployments are up and running, perform these steps: Check that the Elasticsearch and Kibana pods are deployed and started (note that they run in the default Kubernetes namespace): $ kubectl get pods The following output is expected: Verify that the operator pod is deployed and running. 
Note that it runs in the weblogic-operator namespace: $ kubectl -n weblogic-operator get pods The following output is expected: Check that the operator and Logstash containers are running inside the operator's pod: $ kubectl get pods -n weblogic-operator --output json | jq '.items[].spec.containers[].name' The following output is expected:   Verify that the Elasticsearch pod has started: $ kubectl exec -it elasticsearch-3938946127-4cb2s /bin/bash $ curl "http://localhost:9200" $ curl "http://localhost:9200/_cat/indices?v" We get the following indices if Elasticsearch was successfully started: If Logstash is not listed, then you might check the Logstash log output: $ kubectl logs weblogic-operator-501749275-nhjs0 -c logstash -n weblogic-operator If there are no errors in the Logstash log, then it is possible that the Elasticsearch pod started after the Logstash container. If that is the case, simply restart Logstash to fix it. Using Kibana Kibana provides a web application for viewing logs. Its Kubernetes service configuration includes a NodePort so that the application can be accessed outside of the Kubernetes cluster. To find its port number, run the following command: $ kubectl describe service kibana This should print the service NodePort information, similar to this: From the description of the service in our example, the NodePort value is 30911. Kibana's web application can be accessed at the address http://[NODE_IP_ADDRESS]:30911. To verify that Kibana is installed correctly and to check its status, connect to the web page at http://[NODE_IP_ADDRESS]:30911/status. The status should be Green. The next step is to define a Kibana index pattern. To do this, click Discover in the left panel. Notice that the default index pattern is logstash-*, and that the default time filter field name is @timestamp. Click Create. The Management page displays the fields for the logstash-* index: The next step is to customize how the operator logs are presented. To configure the time interval and auto-refresh settings, click the upper-right corner of the Discover page, double-click the Auto-refresh tab, and select the desired interval. For example, 10 seconds. You can also set the time range to limit the log messages to those generated during a particular interval: Logstash is configured to split the operator log records into separate fields. For example: method: dispatchDomainWatch level: INFO log: Watch event triggered for WebLogic Domain with UID: domain1 thread: 39 timeInMillis: 1518372147324 type: weblogic-operator path: /logs/operator.log @timestamp: February 11th 2018, 10:02:27.324 @version: 1 host: weblogic-operator-501749275-nhjs0 class: oracle.kubernetes.operator.Main _id: AWGGCFGulCyEnuJh-Gq8 _type: weblogic-operator _index: logstash-2018.02.11 _score: You can limit the fields that are displayed. For example, select the level, method, and log fields, then click Add. Now only those fields will be shown. You can also use filters to display only those log messages whose fields match an expression. Click Add a filter at the top of the Discover page to create a filter expression. For example, choose method, is one of, and onFailure. Kibana will display all log messages from the onFailure methods: Kibana is now configured to collect the operator logs. You can use its browser-based viewer to easily view and analyze the data in those logs.
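If you want to confirm from the command line that operator log records are actually being indexed, a simple query against Elasticsearch is one way to do it; the pod name below is the example name used earlier, and the index pattern and type field match the fields shown above:
$ kubectl exec -it elasticsearch-3938946127-4cb2s -- curl "http://localhost:9200/logstash-*/_search?q=type:weblogic-operator&size=3&pretty"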
Summary In this blog, you learned about the Elastic Stack and the Oracle WebLogic Server Kubernetes Operator integration architecture, followed by a detailed explanation of how to set up and configure Kibana for interacting with the operator logs. You will find its capabilities, flexibility, and rich feature set to be an extremely valuable asset for monitoring WebLogic domains in a Kubernetes environment.    


T3 RMI Communication for WebLogic Server Running on Kubernetes

Overview Oracle WebLogic Server supports Java EE and includes several vendor-specific enhancements. It has two RMI implementations: beyond the standard Java EE-based IIOP RMI, WebLogic Server has a proprietary RMI protocol called T3. This blog describes the configuration aspects of generic RMI that also apply to T3, and also some T3-specific aspects of running WebLogic RMI on Kubernetes. Background T3 RMI is a proprietary WebLogic Server high-performance RMI protocol and is a major communication component for WebLogic Server internally, and also externally for services like JMS, EJB, OAM, and many others. WebLogic Server T3 RMI configuration has evolved. It starts with a single multi-protocol listen port and listen address on WebLogic Server known as the default channel. We enhanced the default channel by adding a network access point layer, which allows users to configure multiple ports, as well as different protocols for each port, known as custom channels. When WebLogic Server is running on Kubernetes, the listen port number of WebLogic Server may or may not be the same as the Kubernetes exposed port number. For WebLogic Server running on Kubernetes, a custom channel allows us to map these two port numbers. The following list defines key terms that are used in this blog and provides links to documentation that gives more details.
TERMINOLOGY
Listen port - The TCP/IP port that WebLogic Server physically binds to.
Public port - The port number that the caller uses to define the T3 URL. Usually it is the same as the listen port, unless the connection goes through port mapping.
Port mapping - An application of network address translation (NAT) that redirects a communication request from one address and port number combination to another. See port mapping.
Default channel - Every WebLogic Server domain has a default channel that is generated automatically by WebLogic Server. See definition.
Custom channel - Used for segregating different types of network traffic.
ServerTemplateMBean - Also known as the default channel. Learn more.
NetworkAccessPointMBean - Also known as a custom channel. Learn more.
WebLogic cluster communication - WebLogic Server instances in a cluster communicate with one another using either of two basic network technologies: multicast and unicast. Learn more about multicast and unicast.
WebLogic transaction coordinator - The WebLogic Server transaction manager that serves as coordinator of the transaction.
Kubernetes service - See the Kubernetes concepts page, https://kubernetes.io/docs/concepts/services-networking/service/, which defines Kubernetes back ends and NodePort.
Kubernetes pod IP address - Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on. Every pod gets its own cluster-private IP address, so you do not need to explicitly create links between pods or map container ports to host ports. Learn more.
ClusterMBean - See ClusterBroadcastChannel.
WebLogic Server Listen Address WebLogic Server supports two cluster messaging protocols: multicast and unicast. The WebLogic Server on Kubernetes certification was done using the Flannel network fabric. Currently, we only certify unicast communication. By default, WebLogic Server will use the default channel for unicast communication. Users can override it by setting a custom channel on the associated WebLogic Server ClusterMBean.
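As a hedged illustration of that override, a short WLST session along the following lines can set the cluster messaging mode and the broadcast channel; the admin URL, credentials, and the cluster and channel names are assumptions, not values taken from this blog:
connect('weblogic', 'welcome1', 't3://adminserver:7001')
edit()
startEdit()
cd('/Clusters/DockerCluster')
cmo.setClusterMessagingMode('unicast')
cmo.setClusterBroadcastChannel('unicast-channel')   # custom channel to use instead of the default channel
save()
activate()
disconnect()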
As part of the unicast configuration in a WebLogic cluster, a designated listen address and port are required for each WebLogic cluster member so that the members can locate each other. By default, the default channel or custom channel has a null listen address and is assigned at run time as 0.0.0.0. In a multinode Kubernetes cluster environment, neither 0.0.0.0 nor localhost will allow cluster members on different nodes to discover each other. Instead, users can use the Kubernetes pod IP address that the WebLogic Server instance is running on. TCP Load Balancing In general, WebLogic T3 is TCP/IP-based, so it can support TCP load balancing when services are homogeneous, such as in a Kubernetes service with multiple back ends. In WebLogic Server some subsystems are homogeneous, such as JMS and EJB. For example, a JMS front-end subsystem can be configured in a WebLogic cluster in which remote JMS clients can connect to any cluster member. By contrast, a JTA subsystem cannot safely use TCP load balancing in transactions that span multiple WebLogic domains that, in turn, extend beyond a single Kubernetes cluster. The JTA transaction coordinator must establish a direct RMI connection to the server instance that is chosen as the subcoordinator of the transaction when that transaction is either committed or rolled back. The following figure shows a WebLogic transaction coordinator using the T3 protocol to connect to a subcoordinator. The WebLogic transaction coordinator cannot connect to the chosen subcoordinator due to the TCP load balancing. Figure 1: Kubernetes Cluster with Load Balancing Service To support cluster communication between the WebLogic transaction coordinator and the transaction subcoordinator across a Kubernetes environment, the recommended configuration is to have an individual NodePort service defined for each default channel and custom channel. Figure 2: Kubernetes Cluster with One-on-One Service Depending on the application requirements and the WebLogic subsystem used, TCP load balancing might or might not be suitable. Port Mapping and Address Mapping WebLogic Server supports two styles of T3 RMI configuration. One is defined by means of the default channel (see ServerTemplateMBean), and the other is defined by means of the custom channel (see NetworkAccessPointMBean). When running WebLogic Server in Kubernetes, we need to give special attention to the port mapping. When we use NodePort to expose the WebLogic T3 RMI service outside the Kubernetes cluster, we need to map the NodePort to the WebLogic Server listen port. If the NodePort is the same as the WebLogic Server listen port, then users can use the WebLogic Server default channel. Otherwise, users must configure a custom channel that defines a "public port" that matches the NodePort nodePort value, and a "listen port" that matches the NodePort port value. The following figure shows a nonworking NodePort/default channel configuration and a working NodePort/custom channel configuration: Figure 3: T3 External Clients in K8S The following list compares the properties of the default channel with the corresponding ones in the custom channel:
- Multiple protocol support (T3, HTTP, SNMP, LDAP, and more): default channel (ServerTemplateMBean) Yes; custom channel (NetworkAccessPointMBean) No
- RMI over HTTP tunneling: default channel Yes (disabled by default); custom channel Yes (disabled by default)
- Port mapping: default channel No; custom channel Yes
- Address: default channel Yes; custom channel Yes
Examples of WebLogic T3 RMI configurations WebLogic Server supports several ways to configure T3 RMI.
The following examples show the common ones. Using the WebLogic Server Administration Console The following console page shows a WebLogic Server instance called AdminServer with a listen port of 9001 on a null listen address and with no SSL port. Because this server instance is configured with the default channel, port 9001 will support T3, HTTP, IIOP, SNMP, and LDAP. Figure 4: T3 RMI via ServerTemplateMBean on WebLogic console The following console page shows a custom channel with a listen port value of 7010, a null listen address, and a mapping to public port 30010. By default, the custom channel supports the T3 protocol. Figure 5: T3 RMI via NetworkAccessPointMBean on WebLogic Console Using WebLogic RESTful management services The following shell script will create a custom channel with listen port ${CHANNEL_PORT} and a paired public port ${CHANNEL_PUBLIC_PORT}.
#!/bin/sh
HOST=$1
PORT=$2
USER=$3
PASSWORD=$4
CHANNEL=$5
CHANNEL_PORT=$6
CHANNEL_PUBLIC_PORT=$7
echo "Rest EndPoint URL http://${HOST}:${PORT}/management/weblogic/latest/edit"
if [ $# -eq 0 ]; then
  echo "Please specify HOST, PORT, USER, PASSWORD CHANNEL CHANNEL_PORT CHANNEL_PUBLIC_PORT"
  exit 1
fi
# Start edit
curl -j --noproxy '*' --silent \
  --user $USER:$PASSWORD \
  -H X-Requested-By:MyClient \
  -H Accept:application/json \
  -H Content-Type:application/json \
  -d "{}" \
  -X POST http://$HOST:$PORT/management/weblogic/latest/edit/changeManager/startEdit
# Create
curl -j --noproxy '*' --silent \
  --user $USER:$PASSWORD \
  -H X-Requested-By:MyClient \
  -H Accept:application/json \
  -H Content-Type:application/json \
  -d "{ name: '${CHANNEL}' }" \
  -X POST http://$HOST:$PORT/management/weblogic/latest/edit/Servers/myServer/networkAccessPoints
curl -j --noproxy '*' --silent \
  --user $USER:$PASSWORD \
  -H X-Requested-By:MyClient \
  -H Accept:application/json \
  -H Content-Type:application/json \
  -d "{ listenPort: ${CHANNEL_PORT}, publicPort: ${CHANNEL_PUBLIC_PORT} }" \
  -X POST http://$HOST:$PORT/management/weblogic/latest/edit/Servers/myServer/networkAccessPoints/${CHANNEL}
curl -j --noproxy '*' --silent \
  --user $USER:$PASSWORD \
  -H X-Requested-By:MyClient \
  -H Accept:application/json \
  -H Content-Type:application/json \
  -d "{}" \
  -X POST http://$HOST:$PORT/management/weblogic/latest/edit/changeManager/activate
Using a WLST script The following WLST script creates a custom T3 channel named t3Channel that has a listen port listen_port and a paired public port public_port.
host = sys.argv[1]
port = sys.argv[2]
user_name = sys.argv[3]
password = sys.argv[4]
listen_port = sys.argv[5]
public_port = sys.argv[6]
print('custom host : [%s]' % host);
print('custom port : [%s]' % port);
print('custom user_name : [%s]' % user_name);
print('custom password : ********');
print('channel listen port : [%s]' % listen_port);
print('channel public listen port : [%s]' % public_port);
connect(user_name, password, 't3://' + host + ':' + port)
edit()
startEdit()
ls()
cd('/')
cd('Servers')
cd('myServer')
create('t3Channel','NetworkAccessPoint')
cd('NetworkAccessPoints/t3Channel')
set('Protocol','t3')
set('ListenPort',int(listen_port))
set('PublicPort',int(public_port))
print('Channel t3Channel added ...')
activate()
disconnect()
Summary WebLogic Server uses RMI communication over the T3 protocol to communicate between WebLogic Server instances and with other Java programs and clients.
When WebLogic Server runs in a Kubernetes cluster, there are special considerations and configuration requirements that need to be taken into account to make RMI communication work. This blog describes how to configure WebLogic Server and Kubernetes so that RMI communication from outside the Kubernetes cluster can successfully reach the WebLogic Server instances running inside the Kubernetes cluster. For many WebLogic Server features using T3 RMI, such as EJBs, JMS, JTA, and WLST, we support clients inside and outside the Kubernetes cluster. In addition, we support both a single WebLogic domain in a multinode Kubernetes cluster and multiple WebLogic domains in a multinode Kubernetes cluster.
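To tie the channel configuration back to Kubernetes, the following is an illustrative NodePort Service that pairs with a custom channel whose listen port is 7010 and whose public port is 30010, as in Figure 5; the Service name and pod selector label are assumptions:
apiVersion: v1
kind: Service
metadata:
  name: wls-t3-channel
spec:
  type: NodePort
  selector:
    weblogic-server: managed-server-1   # assumed label on the WebLogic Server pod
  ports:
  - name: t3channel
    port: 7010          # matches the channel's listen port
    targetPort: 7010
    nodePort: 30010     # matches the channel's public port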



WebLogic Server Certification on Kubernetes

We are pleased to announce the certification of Oracle WebLogic Server on Kubernetes! As part of this certification, we are releasing a sample on GitHub to create Oracle WebLogic Server 12.2.1.3, 12.2.1.4, and 14.1.1.0 domain images running on Kubernetes. We are also publishing a series of blogs that describe in detail the WebLogic Server configuration and feature support as well as best practices.  A video of a WebLogic Server domain running in Kubernetes can be seen at WebLogic Server on Kubernetes Video. Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It supports a range of container tools, including Docker.  Oracle WebLogic Server configurations running in Docker containers can be deployed and orchestrated on Kubernetes platforms. For additional information about Docker certification with Oracle WebLogic Server, see My Oracle Support Doc ID 2017945.1.  For support for running WebLogic Server domains on Kubernetes platforms other than Oracle Linux, or with a network fabric other than Flannel, see My Oracle Support Doc ID 2349228.1.  For the most current information on supported configurations, see the Oracle Fusion Middleware Supported System Configurations page on Oracle Technology Network. This certification enables users to create clustered and nonclustered Oracle WebLogic Server domain configurations, including both development and production modes, running on Kubernetes clusters.  This certification includes support for the following:
Running one or more WebLogic domains in a Kubernetes cluster
Single or multiple node Kubernetes clusters
WebLogic managed servers in clustered and nonclustered configurations
WebLogic Server Configured and Dynamic clusters
Unicast WebLogic Server cluster messaging protocol
Load balancing HTTP requests using Træfik, Voyager as Ingress controllers on Kubernetes clusters, and Apache
HTTP session replication
JDBC communication with external database systems
JMS
JTA
JDBC store and file store using persistent volumes
Inter-domain communication (JMS, Transactions, EJBs, and so on)
Auto scaling of a WebLogic cluster
Integration with Prometheus monitoring using the WebLogic Monitoring Exporter
RMI communication from outside and inside the Kubernetes cluster
Upgrading applications
Patching WebLogic domains
Service migration of singleton services
Database leasing
In this certification of WebLogic Server on Kubernetes, the following configurations and features are not supported:
WebLogic domains spanning Kubernetes clusters
Whole server migration
Use of Node Manager for WebLogic Server lifecycle management (start/stop)
Consensus leasing
Multicast WebLogic Server cluster messaging protocol
Multitenancy
Production redeployment
Flannel with portmap
We have released a sample to GitHub (https://github.com/oracle/docker-images/tree/master/OracleWebLogic/samples/wls-k8s-domain) that shows how to create and run a WebLogic Server domain on Kubernetes.  The README.md in the sample provides all the steps.  This sample extends the certified Oracle WebLogic Server 12.2.1.3 developer install image by creating a sample domain and cluster that runs on Kubernetes. The WebLogic domain consists of an Administration Server and several managed servers running in a WebLogic cluster. All WebLogic Server instances share the same domain home, which is mapped to an external volume. The persistent volumes must have the correct read/write permissions so that all WebLogic Server instances have access to the files in the domain home.
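For illustration, a persistent volume and claim for the shared domain home might look roughly like the following; the hostPath, names, and sizes are assumptions for a simple single-node setup, and the access mode must allow read/write access from every WebLogic Server pod:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: wls-domain-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany              # all WebLogic Server pods need read/write access to the domain home
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /scratch/external-domain-home/pv001
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wls-domain-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi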
Check out the best practices in the blog WebLogic Server on Kubernetes Data Volume Usage, which explains the WebLogic Server services and files that are typically configured to leverage shared storage, and provides full end-to-end samples that show mounting shared storage for a WebLogic domain that is orchestrated by Kubernetes. After you have this domain up and running, you can deploy JMS and JDBC resources.  The blog Run a WebLogic JMS Sample on Kubernetes provides a step-by-step guide to configure and run a sample WebLogic JMS application in a Kubernetes cluster.  This blog also describes how to deploy WebLogic JMS and JDBC resources, deploy an application, and then run the application. This application is based on a sample application named 'Classic API - Using Distributed Destination' that is included in the WebLogic Server sample applications. The application implements a scenario in which employees submit their names when they are at work, and a supervisor monitors employee arrival time. Employees choose whether to send their check-in messages to a distributed queue or a distributed topic. These destinations are configured on a cluster with two active managed servers. Two message-driven beans (MDBs), corresponding to these two destinations, are deployed to handle the check-in messages and store them in a database. A supervisor can then scan all of the check-in messages by querying the database. The follow-up blog, Run Standalone WebLogic JMS Clients on Kubernetes, expands on the previous blog and demonstrates running standalone JMS clients communicating with each other through WebLogic JMS services, with database-based message persistence. WebLogic Server and Kubernetes each provide a rich set of features to support application deployment. As part of the process of certifying WebLogic Server on Kubernetes, we have identified a set of best practices for deploying Java EE applications on WebLogic Server instances that run in Kubernetes and Docker environments. The blog Best Practices for Application Deployment on WebLogic Server Running on Kubernetes describes these best practices. They include the general recommendations described in Deploying Applications to Oracle WebLogic Server, and also include the application deployment features provided in Kubernetes. One of the most important tasks in providing optimal performance and security of any software system is to make sure that the latest software updates are installed, tested, and rolled out promptly and efficiently with minimal disruption to system availability.  Oracle provides different types of patches for WebLogic Server, such as Patch Set Updates and one-off patches.  The patches you install, and the way in which you install them, depend upon your custom needs and environment. In Kubernetes, Docker, and on-premises environments, we use the same OPatch tool to patch WebLogic Server.  However, with Kubernetes orchestrating the cluster, we can leverage the update strategy options in the StatefulSet controller to roll out the patch from an updated WebLogic Server image.  The blog Patching WebLogic Server in a Kubernetes Environment explains how. And, of course, a very important aspect of certification is security. We have identified best practices for securing Docker and Kubernetes environments when running WebLogic Server, explained in the blog Security Best Practices for WebLogic Server Running in Docker and Kubernetes.
These best practices are in addition to the general WebLogic Server recommendations documented in Securing a Production Environment for Oracle WebLogic Server 12c. In the area of monitoring and diagnostics, we have developed a new open source tool, the WebLogic Monitoring Exporter. WebLogic Server generates a rich set of metrics and runtime state information that provides detailed performance and diagnostic data about the servers, clusters, applications, and other resources that are running in a WebLogic domain. This tool leverages WebLogic's monitoring and diagnostics capabilities when running in Docker/Kubernetes environments. The blog Announcing The New Open Source WebLogic Monitoring Exporter on GitHub describes how to build the exporter from a Dockerfile and source code in the GitHub project https://github.com/oracle/weblogic-monitoring-exporter. The exporter is implemented as a web application that is deployed to the WebLogic Server managed servers in the WebLogic cluster that will be monitored. For detailed information about the design and implementation of the exporter, see Exporting Metrics from WebLogic Server. Once the exporter has been deployed to the running managed servers in the cluster and is gathering metrics and statistics, the data is ready to be collected and displayed via Prometheus and Grafana. Follow the blog entry Using Prometheus and Grafana to Monitor WebLogic Server on Kubernetes, which steps you through collecting metrics in Prometheus and displaying them in Grafana dashboards. Elasticity (scaling up or scaling down) of a WebLogic Server cluster provides increased reliability of customer applications as well as optimization of resource usage. The WebLogic Server cluster can be automatically scaled by increasing (or decreasing) the number of pods based on resource metrics provided by the WebLogic Diagnostic Framework (WLDF).  When the WebLogic cluster scales up or down, WebLogic Server capabilities like HTTP session replication and service migration of singleton services are leveraged to provide the highest possible availability. Refer to the blog entry Automatic Scaling of WebLogic Clusters on Kubernetes for an illustration of automatic scaling of a WebLogic Server cluster in a Kubernetes cloud environment. In addition to certifying WebLogic Server on Kubernetes, the WebLogic Server team has developed a WebLogic Kubernetes Operator.  A Kubernetes Operator is "an application specific controller that extends the Kubernetes API to create, configure, and manage instances of complex applications".  Please refer to the WebLogic Kubernetes Operator documentation for Quick Start guides, samples, and information about how the operator can manage the life cycle of the WebLogic domain. The certification of WebLogic Server on Kubernetes encompasses all the various WebLogic configurations and capabilities described in this blog. Our intent is to enable you to run WebLogic Server in Kubernetes, to run WebLogic Server in the Kubernetes-based Oracle Container Engine that Oracle intends to release shortly, and to enable integration of WebLogic Server applications with applications developed on our Kubernetes-based Container Native Application Development Platform. We hope this information is helpful to customers seeking to deploy WebLogic Server on Kubernetes, and we look forward to your feedback.


Run Standalone WebLogic JMS Clients on Kubernetes

Overview JMS applications are applications that use JMS services to send and receive messages. There are two main types of WebLogic JMS applications: server-side JMS applications and standalone JMS clients. Server-side applications are applications that are running on WebLogic servers or clusters and they are usually Java EE applications like MDBs, servlets and so on. Standalone JMS clients can be applications running on a foreign EE server, desktop applications, or microservices. In my last blog, Run a WebLogic JMS Sample on Kubernetes, we demonstrated WebLogic JMS communication between Java EE applications on Kubernetes and we used file-based message persistence. In this blog, we will expand the previous blog to demonstrate running standalone JMS clients communicating with each other through WebLogic JMS services, and we will use database-based message persistence. First we create a WebLogic domain based on the sample WebLogic domain on GitHub, with an Administrator Server, and a WebLogic cluster. Then we deploy a data source, a JDBC store, and JMS resources to the WebLogic domain on a Kubernetes cluster. After the WebLogic JMS services are ready and running, we create and deploy a Java microservice to the same Kubernetes cluster to send/receive messages to/from the WebLogic JMS destinations. We use REST API and run scripts against the Administration Server pod to deploy the resources which are targeted to the cluster. Creating WebLogic JMS Services on Kubernetes Preparing the WebLogic Base Domain and Data Source If you completed the steps to create the domain, set up the MySQL database, and create the data source as described in the blog Run a WebLogic JMS Sample on Kubernetes, you can go directly to the next section. Otherwise, you need to finish the steps in the following sections of the blog Run a WebLogic JMS Sample on Kubernetes: Section "Creating the WebLogic Base Domain" Section "Setting Up and Running MySQL Server in Kubernetes" Section "Creating a Data Source for the WebLogic Server Domain" Now you should have a WebLogic base domain running on a Kubernetes cluster and a data source which connects to a MySQL database, running in the same Kubernetes cluster. Deploying the JMS Resources with a JDBC Store First, prepare a JSON data file that contains definitions for one database store, one JMS server, and one JMS module. The file will be processed by a Python script to create the resources, one-by-one, using the WebLogic Server REST API. File jms2.json: {"resources": { "jdbc1": { "url": "JDBCStores", "data": { "name": "jdbcStore1", "dataSource": [ "JDBCSystemResources", "ds1" ], "targets": [{ "identity":["clusters", "myCluster"] }] } }, "jms2": { "url": "JMSServers", "data": { "messagesThresholdHigh": -1, "targets": [{ "identity":["clusters", "myCluster"] }], "persistentStore": [ "JDBCStores", "jdbcStore1" ], "name": "jmsserver2" } }, "module": { "url": "JMSSystemResources", "data": { "name": "module2", "targets":[{ "identity": [ "clusters", "myCluster" ] }] } }, "sub2": { "url": "JMSSystemResources/module2/subDeployments", "data": { "name": "sub2", "targets":[{ "identity": [ "JMSServers", "jmsserver2" ] }] } } }} Second, prepare the JMS module file, which contains a connection factory, a distributed queue, and a distributed topic. 
File module2-jms.xml:

<?xml version='1.0' encoding='UTF-8'?>
<weblogic-jms xmlns="http://xmlns.oracle.com/weblogic/weblogic-jms"
              xmlns:sec="http://xmlns.oracle.com/weblogic/security"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xmlns:wls="http://xmlns.oracle.com/weblogic/security/wls"
              xsi:schemaLocation="http://xmlns.oracle.com/weblogic/weblogic-jms http://xmlns.oracle.com/weblogic/weblogic-jms/1.1/weblogic-jms.xsd">
  <connection-factory name="cf2">
    <default-targeting-enabled>true</default-targeting-enabled>
    <jndi-name>cf2</jndi-name>
    <transaction-params>
      <xa-connection-factory-enabled>true</xa-connection-factory-enabled>
    </transaction-params>
    <load-balancing-params>
      <load-balancing-enabled>true</load-balancing-enabled>
      <server-affinity-enabled>false</server-affinity-enabled>
    </load-balancing-params>
  </connection-factory>
  <uniform-distributed-queue name="dq2">
    <sub-deployment-name>sub2</sub-deployment-name>
    <jndi-name>dq2</jndi-name>
  </uniform-distributed-queue>
  <uniform-distributed-topic name="dt2">
    <sub-deployment-name>sub2</sub-deployment-name>
    <jndi-name>dt2</jndi-name>
    <forwarding-policy>Partitioned</forwarding-policy>
  </uniform-distributed-topic>
</weblogic-jms>

Third, copy these two files to the Administration Server pod. Then, in the Administration Server pod, run the Python script to create all the JMS resources:

$ kubectl exec $adminPod -- mkdir /u01/wlsdomain/config/jms/
$ kubectl cp ./module2-jms.xml $adminPod:/u01/wlsdomain/config/jms/
$ kubectl cp ./jms2.json $adminPod:/u01/oracle/
$ kubectl exec $adminPod -- python /u01/oracle/run.py createRes /u01/oracle/jms2.json

Launch the WebLogic Server Administration Console by going to your browser and entering the URL http://<hostIP>:30007/console in the address bar. Make sure that all the JMS resources are running successfully. Visit the monitoring page of the destination dq2 to check whether it has two members, jmsserver2@managed-server-0@dq2 and jmsserver2@managed-server-1@dq2. Now that the WebLogic JMS services are ready, JMS messages sent to this service will be stored in the MySQL database.

Running the WebLogic JMS Client

The JMS client pod is a Java microservice based on the openjdk8 image, packaged with the WebLogic client JAR file. The client-related scripts are on GitHub and include the Dockerfile, the JMS client Java files, and the yaml files. NOTE: You need to get wlthint3client.jar from the installed WebLogic directory $WL_HOME/server/lib and put it in the folder jms-client/container-scripts/lib.

Step 1: Build the Docker image for the JMS clients. The image contains the compiled JMS client classes, which can be run directly.

$ cd jms-client
$ docker build -t jms-client .

Step 2: Create the JMS client pod.

$ kubectl create -f jmsclient.yml

Run the Java programs to send and receive messages from the WebLogic JMS destinations. Please replace $clientPod with the actual client pod name.

Run the sender program to send messages to the destination dq2.

$ kubectl exec -it $clientPod java samples.JMSSender

By default, the sender sends 10 messages on each run and these messages are distributed to the two members of dq2. Check the Administration Console to verify this (a scripted alternative is sketched at the end of this article).

Run the receiver program to receive messages from the destination dq2.

$ kubectl exec -it $clientPod java samples.JMSReceiver dq2

The receiver uses the WebLogic JMSDestinationAvailabilityHelper API to get notifications about the distributed queue's membership changes, so the receiver can receive messages from both members of dq2.
Please refer to the WebLogic document, "Advanced Programming with Distributed Destinations Using the JMS Destination Availability Helper API", for detailed usage.

Summary

In this blog, we expanded our sample Run a WebLogic JMS Sample on Kubernetes to demonstrate using external JMS clients to communicate with WebLogic JMS services running inside a Kubernetes cluster. We leveraged basic Kubernetes facilities to manage WebLogic Server life cycles and used database-based message persistence to persist data beyond the life cycle of the pods. In future blogs, we'll explore hosting a WebLogic JMS cluster in Oracle's upcoming, fully certified, operator-based WebLogic Kubernetes environment. In addition, we'll explore using WebLogic JMS automatic service migration to migrate JMS instances from shut-down pods to running pods.
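If you prefer the command line to the console for the verification step mentioned above, the following is a minimal sketch of how the JMS server runtime could be queried through the REST management API. The $adminPod variable, the port, the credentials, and the exact runtime path are assumptions based on the sample domain and may need adjusting for your WebLogic version.

# Hedged sketch: query the jmsserver2 runtime on managed-server-0 from inside the Administration Server pod.
# $adminPod, port 8001, and weblogic/weblogic1 are assumptions taken from the sample setup.
kubectl exec $adminPod -- curl -s --user weblogic:weblogic1 \
  -H Accept:application/json \
  "http://localhost:8001/management/weblogic/latest/domainRuntime/serverRuntimes/managed-server-0/JMSRuntime/JMSServers/jmsserver2?links=none"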


Patching WebLogic Server in a Kubernetes Environment

Of course, one of the most important tasks in providing optimal performance and security of any software system is to make sure that the latest software updates are installed, tested, and rolled out promptly and efficiently with minimal disruption to system availability. Oracle provides different types of patches for WebLogic Server, such as Patch Set Updates, Security Patch Updates, and One-Off patches. The patches you install, and the way in which you install them, depend upon your custom needs and environment.

For WebLogic Server running on Kubernetes, we recently shared on GitHub the steps for creating a WebLogic Server instance with a shared domain home directory that is mapped to a Kubernetes volume. In Kubernetes, Docker, and on-premises environments, we use the same OPatch tool to patch WebLogic Server. However, with Kubernetes orchestrating the cluster, we can leverage the update strategy options in the StatefulSet controller to roll out the patch from an updated WebLogic Server image. In this blog, I explain how.

Prerequisites

Create the WebLogic Server environment on Kubernetes based on the instructions provided at https://github.com/oracle/docker-images/tree/master/OracleWebLogic/samples/wls-k8s-domain. The patching processes described below are based on the environment created.
Make sure that the WebLogic Server One-Off patches or Patch Set Updates are accessible from the environment created in the preceding step.

Patch Set Updates and One-Off Patches

Patch Set Updates are cumulative patches that include security fixes and critical fixes. They are used to patch Oracle WebLogic Server only and are released on a regular basis. For additional details related to Patch Set Updates, see Fusion Middleware Patching with OPatch. One-Off patches are targeted to solve specific known issues or to add feature enhancements. For information about how to download patches, see My Oracle Support.

Kubernetes Update Strategies for StatefulSets

There are three different update strategy options available for StatefulSets that you can use for the following tasks:

To configure and disable automated rolling updates for container images
To configure resource requests or limits, or both
To configure labels and annotations of the pods

For details about the Kubernetes StatefulSets update strategies, see Update StatefulSets at the following location: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets. These update strategies are as follows:

On Delete – Manually delete pods, in any sequence, based on your environment configuration. When Kubernetes detects that a pod is deleted, it creates a new pod that is based on the specification defined for that StatefulSet. This is the default update strategy when a StatefulSet is created.

Rolling Updates – Perform a rolling update of all the pods in the cluster. Kubernetes will delete and re-create all the pods defined in the StatefulSet controller one at a time, but in reverse ordinal order.

Rolling Update + Partition – A rolling update can also be partitioned, which is determined by the partition value in the specification defined for the StatefulSet. For example, if there are four pods in the cluster and the partition value is 2, only two pods are updated. The other two pods are updated only if the partition value is lowered (to 0, for example) or if the partition attribute is removed from the specification.
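To make the partition example above concrete, here is a minimal, hedged sketch of how the current update strategy can be inspected and a partitioned rolling update configured with kubectl. The StatefulSet name managed-server is an assumption borrowed from the wls-k8s-domain sample; substitute your own name.

# Hedged sketch; "managed-server" is an assumed StatefulSet name.
# Show the update strategy currently configured on the StatefulSet:
kubectl get statefulset managed-server -o jsonpath='{.spec.updateStrategy}'

# Switch to a partitioned RollingUpdate so that only pods with an ordinal >= 2 are updated:
kubectl patch statefulset managed-server -p \
  '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'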
Methods for Updating the StatefulSet Controller

In a StatefulSet, there are three attributes that need to be verified prior to roll out: image, imagePullPolicy, and updateStrategy. Any one of these attributes can be updated by using an in-place update approach. The in-place options are:

kubectl apply – Update the yaml file that was used for creating the StatefulSet, and execute kubectl apply -f <statefulset yml file> to roll out the new configuration to the Kubernetes cluster.

kubectl edit – Directly update the attribute value using kubectl edit statefulset <statefulset name>.

kubectl patch – Directly update the attribute using kubectl patch, for example to set updateStrategy to RollingUpdate, to set RollingUpdate with the partition option, or to update the image attribute to wls-k8s-domain-v1 using JSON format (hedged sample commands are sketched after this section).

Kubernetes Dashboard – Drill down to the specific StatefulSet from the menu path and update the value of image, imagePullPolicy, and updateStrategy.

Steps to Apply One-Off Patches and Patch Set Updates with an External Domain Home

To create a patched WebLogic image with a new One-Off patch and apply it to all the pods:

1. Complete the steps in Example of Image with WLS Domain in GitHub to create a patched WebLogic image.
2. If the Kubernetes cluster is configured on multiple nodes, and the newly created image is not available in the Docker registry, complete the steps provided in docker save and docker load to copy the image to all the nodes in the cluster.
3. Update the controller definition using one of the methods described in Methods for Updating the StatefulSet Controller. If you want Kubernetes to automatically apply the new image on all the pods for the StatefulSet, you can set the updateStrategy value to RollingUpdate.
4. Apply the new image on the admin-server pod. Because there is only one Administration Server in the cluster, the preferred option is the RollingUpdate update strategy. After the change is committed to the StatefulSet controller, Kubernetes will delete and re-create the Administration Server pod automatically.
5. Apply the new image to all the pods defined in the Managed Server StatefulSet:
a) For the OnDelete option, get the list of the pods in the cluster and change the updateStrategy value to OnDelete. You then need to manually delete all the pods in the cluster to roll out the new image (see the sketch after this section).
b) For the RollingUpdate option, after you change the updateStrategy value to RollingUpdate, Kubernetes will delete and re-create the pods created for the Managed Server instances in a rolling fashion, but in reverse ordinal order, as shown in Figure 1 below.
c) If the partition attribute is added to the RollingUpdate value, the rolling update order depends on the partition value. Kubernetes will roll out the new image to the pods whose ordinal is greater than or equal to the partition value.

Figure 1: Before and After Patching the Oracle Home Image Using the Kubernetes StatefulSet RollingUpdate Strategy

Roll Back

In case you need to roll back the patch, you use the same steps as when applying a new image; that is, change the image value back to the original image. You should retain at least two or three versions of the images in the registry.
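The patch commands referenced in the list above are not reproduced in this text, so the following is a minimal, hedged sketch of what they might look like. The StatefulSet name managed-server, the pod names, and the image tag wls-k8s-domain-v1 are assumptions based on the sample environment.

# Hedged sketch; the StatefulSet name, pod names, and image tag are assumptions.
# Set the update strategy to RollingUpdate:
kubectl patch statefulset managed-server -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}'

# Point the pod template at the patched image using a JSON patch:
kubectl patch statefulset managed-server --type='json' \
  -p='[{"op":"replace","path":"/spec/template/spec/containers/0/image","value":"wls-k8s-domain-v1"}]'

# For the OnDelete strategy, list the pods and delete them manually so they restart with the new image:
kubectl get pods
kubectl delete pod managed-server-0 managed-server-1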
Monitoring

There are several ways you can monitor the rolling update progress of your WebLogic domain:

1. Use the kubectl command to check the pod status. For example, the following commands show the progress while doing a rolling update of two Managed Server pods:

kubectl get pod -o wide
kubectl rollout status statefulset

2. You can use the REST API to query the Administration Server to monitor the status of the Managed Servers during the rolling update. For information about how to use the REST API to monitor WebLogic Server, see Oracle WebLogic RESTful Management Services: From Command Line to JavaFX. For example, you can query the Administration Server for the state of each server instance (a hedged sample query is sketched at the end of this article).

3. You can use the WebLogic Server Administration Console to monitor the status of the update. In the console you can watch the server instance wls-domain-ms-1 stop while it is updated, and then see the update switch to wls-domain-ms-0.

4. A fourth way is to use the Kubernetes Dashboard. From your browser, enter the URL https://<hostname>:<nodePort>

Summary

The process for applying a One-Off Patch or Patch Set Updates to WebLogic Server on Kubernetes is the same as when running in a bare metal environment. When you run WebLogic Server in a Kubernetes StatefulSet, we recommend that you create a new patched image by extending the previous version of the image and then use the update strategy (OnDelete, RollingUpdate, or "RollingUpdate + partition") that is best suited for your environment. In a future blog, we will explore the patch options available with the Kubernetes Operator. You might be able to integrate some of the manual steps shared above with the operator to further simplify the overall WebLogic Server patching process when running on Kubernetes.
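As referenced in item 2 above, here is a minimal sketch of a REST query that reports the name and state of each server during the rolling update. The $adminPod variable, the internal port, the credentials, and the query parameters are assumptions based on the sample environment; adjust them to your domain.

# Hedged sketch; $adminPod, the port, and the credentials are assumptions.
kubectl exec $adminPod -- curl -s --user weblogic:weblogic1 \
  -H Accept:application/json \
  "http://localhost:8001/management/weblogic/latest/domainRuntime/serverLifeCycleRuntimes?links=none&fields=name,state"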


Best Practices for Application Deployment on WebLogic Server Running on Kubernetes

Overview

WebLogic Server and Kubernetes each provide a rich set of features to support application deployment. As part of the process of certifying WebLogic Server on Kubernetes, we have identified a set of best practices for deploying Java EE applications on WebLogic Server instances that run in Kubernetes and Docker environments. This blog describes those best practices. They include the general recommendations described in Deploying Applications to Oracle WebLogic Server, and also include the application deployment features provided in Kubernetes.

Application Deployment Terminology

Both WebLogic Server and Kubernetes use similar terms for the resources they manage, but with different meanings. For example, the notion of an application or a deployment has slightly different meanings, which can create confusion. The list below covers key terms that are used in this blog and how they are defined differently in WebLogic Server and Kubernetes. See the Kubernetes Reference Documentation for a standardized glossary of Kubernetes terminology.

Table 1 Application Deployment Terminology

Application
  WebLogic Server: A Java EE application (an enterprise application or Web application) or a standalone Java EE module (such as an EJB or resource adapter) that has been organized according to the Java EE specification. An application unit includes a web application, enterprise application, Enterprise JavaBean, resource adapter, web service, Java EE library, or an optional package. An application unit may also include JDBC, JMS, or WLDF modules, or a client application archive.
  Kubernetes: Software that is containerized and managed in a cluster environment by Kubernetes. WebLogic Server is an example of a Kubernetes application.

Application Deployment
  WebLogic Server: The process of making a Java Enterprise Edition (Java EE) application or module available for processing client requests in WebLogic Server.
  Kubernetes: A way of packaging, instantiating, running, and communicating with containerized applications in a cluster environment. Kubernetes also has an API object called a Deployment that manages a replicated application.

Deployment Tool
  WebLogic Server: weblogic.Deployer utility, Administration Console, WebLogic Scripting Tool (WLST), wldeploy Ant task, weblogic-maven-plugin Maven plug-in, WebLogic Deployment API, auto-deployment feature
  Kubernetes: kubeadm, kubectl, minikube, Helm Chart, kops

Cluster
  WebLogic Server: A WebLogic cluster consists of multiple WebLogic Server instances running simultaneously and working together to provide increased scalability and reliability. A cluster appears to clients to be a single WebLogic Server instance. The server instances that constitute a cluster can run on the same machine, or be located on different machines. You can increase a cluster's capacity by adding additional server instances to the cluster on an existing machine, or you can add machines to the cluster to host the incremental server instances. Each server instance in a cluster must run the same version of WebLogic Server.
  Kubernetes: A Kubernetes cluster consists of a master node and a set of worker nodes. In a production environment these run in a distributed setup on multiple nodes. For testing purposes, all the components can run on the same node (either a physical host or a virtual machine).

Within the context of this blog, the following definitions are used:

The application mentioned in this blog is the Java EE application.
The application deployment in this blog is the Java EE application deployment on WebLogic Server.
A Kubernetes application is the software managed by Kubernetes.
For example, WebLogic Server itself is a Kubernetes application.

Summary of Best Practices for Application Deployment in Kubernetes

In this blog, the best practices for application deployment on WebLogic Server running in Kubernetes include several parts:

Distributing Java EE application deployment files to a Kubernetes environment so the WebLogic Server containers in pods can access the deployment files.
Deploying Java EE applications in a Kubernetes environment so the applications are available for the WebLogic Server containers in pods to process the client requests.
Integrating Kubernetes applications with the ReadyApp framework to check the Kubernetes applications' readiness status.

General Java EE Application Deployment Best Practices Overview

Before drilling down into the best practices details, let's briefly review the general Java EE application deployment best practices, which are described in Deploying Applications to Oracle WebLogic Server. The general Java EE application deployment process involves multiple parts, mainly:

Preparing the Java EE application or module. See Preparing Applications and Modules for Deployment, including Best Practices for Preparing Deployment Files.
Configuring the Java EE application or module for deployment. See Configuring Applications for Production Deployment, including Best Practices for Managing Application Configuration.
Exporting the Java EE application or module for deployment to a new environment. See Exporting an Application for Deployment to New Environments, including Best Practices for Exporting a Deployment Configuration.
Deploying the Java EE application or module. See Deploying Applications and Modules with weblogic.Deployer, including Best Practices for Deploying Applications.
Redeploying the Java EE application or module. See Redeploying Applications in a Production Environment.

Distributing Java EE Application Deployment Files in Kubernetes

Assume the WebLogic Server instances have been deployed into Kubernetes and Docker environments. Before you deploy the Java EE applications on the WebLogic Server instances, the Java EE application deployment files (for example, the EAR, WAR, and RAR files) need to be distributed to locations that can be accessed by the WebLogic Server instances in the pods. In Kubernetes, the deployment files can be distributed by means of Docker images, or manually by an administrator.

Pre-distribution of Java EE Applications in Docker Images

A Docker image can contain a pre-built WebLogic Server domain home directory that has one or more Java EE applications deployed to it. When the containers in the pods are created and started using the same Docker image, all containers have the same Java EE applications deployed to them. If the Java EE applications in the Docker image are updated to a newer version, a new Docker image can be created on top of the current existing Docker image, as shown in Figure 1. However, as newer application versions are introduced, additional layers are added to the image, which consumes more resources, such as disk space. Consequently, having an excessive number of layers in the Docker image is not recommended.

Figure 1 Pre-distribution of Java EE Applications in Layered Docker Images

Using Volumes in a Kubernetes Cluster

Application files can be shared among all the containers in all the pods by mapping the application volume directory in the pods to an external directory on the host. This makes the application files accessible to all the containers in the pods.
When using volumes, the application files need to be copied only once to the directory on the host. There is no need to copy the files to each pod. This saves disk space and deployment time, especially for large applications. Using volumes is recommended for distributing the Java EE applications to WebLogic Server instances running in Kubernetes.

Figure 2 Mounting Volumes to an External Directory

As shown in Figure 2, every container in each of the three pods has an application volume directory /shared/applications. Each of these directories is mapped to the same external directory on the host: /host/apps. After the administrator puts the application file simpleApp.war in the /host/apps directory on the host, this file can then be accessed by the containers in each pod from the /shared/applications directory. Note that Kubernetes supports different volume types. For information about determining the volume type to use, creating the volume directory, determining the medium that backs it, and identifying the contents of the volume, see Volumes in the Kubernetes documentation.

Best Practices for Distributing Java EE Application Deployment Files in Kubernetes

Use volumes to persist and share the application files across the containers in all pods. On-disk files in a container are ephemeral.
When using a pre-built WebLogic Server domain home in a Docker image, use a volume to store the domain home directory on the host. A sample WebLogic domain wls-k8s-domain that includes a pre-built WebLogic Server domain home directory is available from GitHub at https://github.com/oracle/docker-images/tree/master/OracleWebLogic/samples/wls-k8s-domain
Store the application files in a volume whose location is separate from the domain home volume directory on the host.
A deployment plan generated for an existing Java EE web application that is deployed to WebLogic Server can be stored in a volume as well. For more details about using the deployment plan, see the tutorial at http://www.oracle.com/webfolder/technetwork/tutorials/obe/fmw/wls/12c/09-DeployPlan--4464/deployplan.htm.
By default, all processes in WebLogic Server pods run with user ID 1000 and group ID 1000. Make sure that the proper access permissions are set on the application volume directory so that user ID 1000 or group ID 1000 has read and write access to it.

Java EE Application Deployment in Kubernetes

After the application deployment files are distributed throughout the Kubernetes cluster, you have several WebLogic Server deployment tools to choose from for deploying the Java EE applications to the containers in the pods. WebLogic Server supports the following deployment tools for deploying, undeploying, and redeploying Java EE applications:

WebLogic Administration Console
WebLogic Scripting Tool (WLST)
weblogic.Deployer utility
REST API
wldeploy Ant task
The WebLogic Deployment API, which allows you to perform deployment tasks programmatically using Java classes.
The auto-deployment feature. When auto-deployment is enabled, copying an application into the /autodeploy directory of the Administration Server causes that application to be deployed automatically. Auto-deployment is intended for evaluation or testing purposes in a development environment only.

For more details about using these deployment tools, see Deploying Applications to Oracle WebLogic Server. These tools can also be used in Kubernetes.
The following samples show multiple ways to deploy and undeploy an application simpleApp.war in a WebLogic cluster myCluster:

Using WLST in a Dockerfile
Using the weblogic.Deployer utility
Using the REST API

Note that the environment in which the deployment command is run is created based upon the sample WebLogic domain wls-k8s-domain available on GitHub at https://github.com/oracle/docker-images/tree/master/OracleWebLogic/samples/wls-k8s-domain. In this environment:

A sample WLS 12.2.1.3 domain and cluster are created by extending the Oracle WebLogic developer install image and running it in Kubernetes. The WebLogic domain (for example, base_domain) consists of an Administration Server and several Managed Servers running in the WebLogic cluster myCluster. Each WebLogic Server instance is started in a container. Each pod has one WebLogic Server container. For details about the wls-k8s-domain sample, see the GitHub page.
Each pod has one domain home volume directory (for example, /u01/wlsdomain). This domain home volume directory is mapped to an external directory (for example, /host/domain). The sample WLS 12.2.1.3 domain is created under this external directory.
Each pod can have an application volume directory (for example, /shared/applications) created in the same way as the domain home volume directory. This application volume directory is mapped to an external directory (for example, /host/apps). The Java EE applications can be distributed to this external directory.

Sample of Using Offline WLST in a Dockerfile to Deploy a Java EE Application

In this sample, a Dockerfile is used for building an application Docker image. This application Docker image extends a wls-k8s-domain image that creates the sample wls-k8s-domain domain. The Dockerfile also calls WLST with a py script to update the sample wls-k8s-domain domain configuration with a new application deployment in offline mode.

# Dockerfile
# Extends wls-k8s-domain
FROM wls-k8s-domain
# Copy the script files and call a WLST script.
COPY container-scripts/* /u01/oracle/
# Run a py script to add a new application deployment into the domain configuration
RUN wlst /u01/oracle/app-deploy.py

The script app-deploy.py is called to deploy the application simpleApp.war using the offline WLST APIs:

# app-deploy.py
# Read the domain
readDomain(domainhome)

# Create application
# ==================
cd('/')
app = create('simpleApp', 'AppDeployment')
app.setSourcePath('/shared/applications/simpleApp.war')
app.setStagingMode('nostage')

# Assign application to cluster
# =================================
assign('AppDeployment', 'simpleApp', 'Target', 'myCluster')

# Update domain. Close It. Exit
# =================================
updateDomain()
closeDomain()
exit()

The application is deployed during the application Docker image build phase. When a WebLogic Server container is started, the simpleApp application is started and is ready to service client requests.

Sample of Using weblogic.Deployer to Deploy and Undeploy a Java EE Application in Kubernetes

In this sample, the application simpleApp.war exists in the external directory /host/apps, which is located on the host as described in the prior section, Using Volumes in a Kubernetes Cluster.
The following commands show how to run the weblogic.Deployer utility in the Administration Server pod:

# Find the pod id for the Admin Server pod: admin-server-1238998015-f932w
> kubectl get pods
NAME                            READY     STATUS    RESTARTS   AGE
admin-server-1238998015-f932w   1/1       Running   0          11m
managed-server-0                1/1       Running   0          11m
managed-server-1                1/1       Running   0          8m

# Find the Admin Server service name that can be connected to from the deployment command.
# Here the Admin Server service name is admin-server which has a port 8001.
> kubectl get services
NAME            CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
admin-server    10.102.160.123   <nodes>       8001:30007/TCP   11m
kubernetes      10.96.0.1        <none>        443/TCP          39d
wls-service     10.96.37.152     <nodes>       8011:30009/TCP   11m
wls-subdomain   None             <none>        8011/TCP         11m

# Execute /bin/bash in the Admin Server pod
> kubectl exec -it admin-server-1238998015-f932w /bin/bash

# Once in the Admin Server pod, set up the WebLogic environment, then run weblogic.Deployer
# to deploy the simpleApp.war located in the /shared/applications directory to
# the cluster "myCluster"
]$ cd /u01/wlsdomain/base_domain/bin
]$ . setDomainEnv.sh
]$ java weblogic.Deployer -adminurl t3://admin-server:8001 -user weblogic -password weblogic1 -name simpleApp -targets myCluster -deploy /shared/applications/simpleApp.war

You can verify that the Java EE application deployment to the WebLogic cluster completed successfully by accessing the following URL:

# Kubernetes routes the traffic to both managed-server-0 and managed-server-1 via the wls-service port 30009.
http://<hostIP>:30009/simpleApp/Scrabble.jsp

The following commands use the weblogic.Deployer utility to undeploy the application. Note their similarity to the steps for deployment:

# Execute /bin/bash in the Admin Server pod
> kubectl exec -it admin-server-1238998015-f932w /bin/bash

# Undeploy the simpleApp
]$ cd /u01/wlsdomain/base_domain/bin
]$ . setDomainEnv.sh
]$ java weblogic.Deployer -adminurl t3://admin-server:8001 -user weblogic -password weblogic1 -undeploy -name simpleApp

Sample of Using REST APIs to Deploy and Undeploy a Java EE Application in Kubernetes

In this sample, the application simpleApp.war has already been distributed to the host directory /host/apps. This host directory, in turn, mounts to the application volume directory /shared/applications, which is in the pod admin-server-1238998015-f932w. The following example shows executing a curl command in the pod admin-server-1238998015-f932w. This curl command sends a REST request to the Administration Server using NodePort 30007 to deploy the simpleApp to the WebLogic cluster myCluster.
# deploy simpleApp.war file to the WebLogic cluster
> kubectl exec admin-server-1238998015-f932w -- curl -v --user weblogic:weblogic1 \
          -H X-Requested-By:MyClient \
          -H Content-Type:application/json \
          -d "{ name: 'simpleApp', \
                sourcePath: '/shared/applications/simpleApp.war', \
                targets: [ { identity: [ 'clusters', 'myCluster' ] } ] }" \
          -X POST http://<hostIP>:30007/management/weblogic/latest/edit/appDeployments

The following command uses the REST API to undeploy the application:

# undeploy simpleApp.war file from the WebLogic cluster
> kubectl exec admin-server-1238998015-f932w -- curl -v --user weblogic:weblogic1 \
          -H X-Requested-By:MyClient \
          -H Accept:application/json \
          -X DELETE http://<hostIP>:30007/management/wls/latest/deployments/application/id/simpleApp

Best Practices for Deploying Java EE Applications in Kubernetes

Deploy Java EE applications or modules to a WebLogic cluster, instead of to individual WebLogic Server instances. This simplifies scaling the WebLogic cluster later because changes to the deployment strategy are not necessary.
WebLogic Server deployment tools can be used in the Kubernetes environment. When updating an application, follow the same steps as described above to distribute and deploy the application.
When using a pre-built WebLogic Server domain home in a Docker image, deploying applications to the domain automatically updates the domain configuration. However, deploying applications this way causes the domain configuration in the pods to become out of sync with the domain configuration in the Docker image. You can avoid this synchronization issue whenever possible by including the required applications in the pre-built domain home in the Docker image. This way you can avoid extra deployment steps later on.

Integrating the ReadyApp Framework in the Kubernetes Readiness Probe

Kubernetes provides a flexible approach to configuring load balancers and frontends in a way that isolates clients from the details of how services are deployed. As part of this approach, Kubernetes performs and reacts to a readiness probe to determine when a container is ready to accept traffic. By contrast, WebLogic Server provides the ReadyApp framework, which reports whether the WebLogic Server instance startup is completed and the instance is ready to service client requests. The ReadyApp framework uses two states: READY and NOT READY. The READY state means that not only is a WebLogic Server instance in a RUNNING state, but also that all applications deployed on the WebLogic Server instance are ready to service requests. When in the NOT READY state, the WebLogic Server instance startup is incomplete and the instance is unable to accept traffic.

When starting a WebLogic Server container in a Kubernetes environment, you can use a Kubernetes readiness probe to access the ReadyApp framework on WebLogic Server. When the ReadyApp framework reports a READY state for a WebLogic Server container, the readiness probe notifies Kubernetes that traffic to the WebLogic Server container may begin. The following example shows how to use the ReadyApp framework integrated in a readiness probe to determine whether a WebLogic Server container running on the port 8011 is ready to accept traffic.

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  [...]
spec:
  [...]
  template:
    [...]
    spec:
      containers:
        [...]
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /weblogic/ready
            port: 8011
            scheme: HTTP
[...]

The ReadyApp framework on WebLogic Server can be accessed from the URL http://<hostIP>:<port>/weblogic/ready. When WebLogic Server is running, this URL returns a page with either a status 200 (READY) or 503 (NOT READY). When WebLogic Server is not running, an Error 404 page appears. (A quick curl check is sketched at the end of this article.) Similar to WebLogic Server, other Kubernetes applications can register with the ReadyApp framework and use a readiness probe to check the state of the ReadyApp framework on those Kubernetes applications. See Using the ReadyApp Framework for information about how to register an application with the ReadyApp framework.

Best Practices for Integrating the ReadyApp Framework in the Kubernetes Readiness Probe

We recommend using the ReadyApp framework to register Kubernetes applications and using the readinessProbe to check the status of the ReadyApp framework to determine whether the applications are ready to service requests. Kubernetes routes traffic to those Kubernetes applications only when the ReadyApp framework reports a READY state.

Conclusion

When integrating WebLogic Server in Kubernetes and Docker environments, customers can use the existing, powerful WebLogic Server deployment tools to deploy their Java EE applications onto WebLogic Server instances running in Kubernetes. Customers also can use Kubernetes features to manage WebLogic Server: they can use volumes to share the application files with all the containers among all pods in a Kubernetes cluster, use the readinessProbe to monitor WebLogic Server startup state, and more. This integration not only allows customers to support flexible deployment scenarios that fit into their company's business practices, but also provides ways to quickly deploy WebLogic Server in a cloud environment, to autoscale it on the fly, and to update it seamlessly.
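As mentioned above, the readiness check can also be exercised by hand. The following is a minimal sketch; the host and the NodePort 30009 (taken from the sample service) are assumptions and should be replaced with your own values.

# Hedged sketch; host and NodePort are assumptions from the sample environment.
# Prints 200 when the server is READY, 503 when NOT READY, and 404 if WebLogic Server is not running.
curl -s -o /dev/null -w "%{http_code}\n" http://<hostIP>:30009/weblogic/ready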


WebLogic Server on Kubernetes Data Volume Usage

As part of certifying WebLogic Server on Kubernetes, we have identified best practices for sharing file data among WebLogic Server pods that are running in a Kubernetes environment. In this blog, I review the WebLogic Server services and files that are typically configured to leverage shared storage, and I provide full end-to-end samples, which you can download and run, that show mounting shared storage for a WebLogic domain that is orchestrated by Kubernetes.

WebLogic Server Persistence in Volumes

When running WebLogic Server on Kubernetes, refer to the blog Docker Volumes in WebLogic for information about the advantages of using data volumes. That blog also identifies the WebLogic Server artifacts that are good candidates for being persisted in those data volumes.

Kubernetes Solutions

In a Kubernetes environment, pods are ephemeral. To persist data, Kubernetes provides the Volume abstraction, and the PersistentVolume (PV) and PersistentVolumeClaim (PVC) API resources. Based on the official Kubernetes definitions [Kubernetes Volumes and Kubernetes Persistent Volumes and Claims], a PV is a piece of storage in the cluster that has been provisioned by an administrator, and a PVC is a request for storage by a user. Therefore, PVs and PVCs are independent entities outside of pods. They can be easily referenced by a pod for file persistence and file sharing among pods inside a Kubernetes cluster. When running WebLogic Server on Kubernetes, using PVs and PVCs to handle shared storage is recommended for the following reasons:

Usually WebLogic Server instances run in pods on multiple nodes that require access to a shared PV.
The life cycle of a WebLogic Server instance is not limited to a single pod.
PVs and PVCs can provide more control. For example, the ability to specify: access modes for concurrent read/write management, mount options provided by volume plugins, storage capacity requirements, reclaim policies for resources, and more.

Use Cases of Kubernetes Volumes for WebLogic Server

To see the details about the samples, or to run them locally, please download the examples and follow the steps provided below.

Software Versions

Host machine: Oracle Linux 7u3 UEK4 (x86-64)
Kubernetes v1.7.8
Docker 17.03 CE

Prepare Dependencies

Build the oracle/weblogic:12.2.1.3-developer image locally based on the Dockerfile and scripts at https://github.com/oracle/docker-images/tree/master/OracleWebLogic/dockerfiles/12.2.1.3/.

Download the WebLogic Kubernetes domain sample source code from https://github.com/oracle/docker-images/tree/master/OracleWebLogic/samples/wls-k8s-domain. Put the sample source code into a local folder named wls-k8s-domain.

Build the WebLogic domain image locally based on the Dockerfile and scripts:

$ cd wls-k8s-domain
$ docker build -t wls-k8s-domain .

For Use Case 2, below, prepare an NFS server and a shared directory by entering the following commands (in this example I use machine 10.232.128.232). Note that Use Case 1 uses a host path instead of NFS and does not require this step.

# systemctl start rpcbind.service
# systemctl start nfs.service
# systemctl start nfslock.service
$ mkdir -p /scratch/nfsdata
$ chmod o+rw /scratch/nfsdata
# echo /scratch/nfsdata *(rw,fsid=root,no_root_squash,no_subtree_check) >> /etc/exports

By default, in the WebLogic domain wls-k8s-domain, all processes in pods that contain WebLogic Server instances run with user ID 1000 and group ID 1000.
Proper permissions need to be set to the external NFS shared directory to make sure that user ID 1000 and group ID 1000 have read and write permission to the NFS volume. To simplify the permissions management in the examples, we grant read and write permission to others to the shared directory as well. Use Case 1: Host Path Mapping at Individual Machine with a Kubernetes Volume The WebLogic domain consists of an Administration Server and multiple Managed Servers, each running inside its own pod. All pods have volumes directly mounted to a folder on the physical machine. The domain home is created in a shared folder when the Administration Server pod is first started. At runtime, all WebLogic Server instances, including the Administration Server, share the same domain home directory via a mounted volume. Note: This example runs on a single machine, or node, but this approach also works when running the WebLogic domain across multiple machines. When running on multiple machines, each WebLogic Server instance must share the same directory. In turn, the host path can refer to this directory, thus access to the volume is controlled by the underlying shared directory. Given a set of machines that are already set up with a shared directory, this approach is simpler than setting up an NFS client (although maybe not as portable). To run this example, complete the following steps: Prepare the yml file for the WebLogic Administration Server. Edit wls-admin.yml to mount the host folder /scratch/data to /u01/wlsdomain in the Administration Server pod: apiVersion: apps/v1beta1 kind: Deployment metadata: name: admin-server spec: replicas: 1 template: metadata: labels: app: admin-server spec: containers: - name: admin-server image: wls-k8s-domain imagePullPolicy: Never command: ["sh"] args: ["/u01/oracle/startadmin.sh"] readinessProbe: httpGet: path: /weblogic/ready port: 8001 initialDelaySeconds: 15 timeoutSeconds: 5 ports: - containerPort: 8001 env: - name: WLUSER valueFrom: secretKeyRef: name: wlsecret key: username - name: WLPASSWORD valueFrom: secretKeyRef: name: wlsecret key: password volumeMounts: # name must match the volume name below - name: domain-home mountPath: "/u01/wlsdomain" volumes: - name: domain-home hostPath: path: /scratch/data type: Directory Prepare the yml file for the Managed Servers. 
Edit wls-stateful.yml to mount the host folder /scratch/data to /u01/wlsdomain in the Managed Server pods: apiVersion: apps/v1beta1 # for versions before 1.6.0 use extensions/v1beta1 kind: StatefulSet metadata: name: managed-server spec: serviceName: wls-subdomain replicas: 2 template: metadata: name: ms labels: app: managed-server spec: subdomain: wls-subdomain containers: - name: managed-server image: wls-k8s-domain imagePullPolicy: Never command: ["sh"] args: ["/u01/oracle/startms.sh"] readinessProbe: httpGet: path: /weblogic/ready port: 8011 initialDelaySeconds: 15 timeoutSeconds: 5 ports: - containerPort: 8011 env: - name: JAVA_OPTIONS value: "-Dweblogic.StdoutDebugEnabled=true" - name: USER_MEM_ARGS value: "-Xms64m -Xmx256m " - name: MY_POD_IP valueFrom: fieldRef: fieldPath: status.podIP - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: DNS_DOMAIN_NAME value: "wls-subdomain" - name: WLUSER valueFrom: secretKeyRef: name: wlsecret key: username - name: WLPASSWORD valueFrom: secretKeyRef: name: wlsecret key: password volumeMounts: # name must match the volume name below - name: domain-home mountPath: "/u01/wlsdomain" volumes: - name: domain-home hostPath: path: /scratch/data type: Directory Create the Administration Server and Managed Server pods with the shared volume. These WebLogic Server instances will start from the mounted domain location. $ kubectl create -f wls-admin.yml $ kubectl create -f wls-stateful.yml Use Case 2: NFS Sharing with Kubernetes PV and PVC This example shows a WebLogic Server cluster with one Administration Server and several Managed Server instances, each server residing in a dedicated pod. All the pods have volumes mounted to a central NFS server that is located in a physical machine that the pods can reach. The first time the Administration Server pod is started, the WebLogic domain is created in the shared NFS folder. At runtime, all WebLogic Server instances, including the Administration Server, share the same domain home directory via a mounted volume by PV and PVC. In this sample we have the NFS server on host 10.232.128.232, which has a read/write export to all external hosts on /scratch/nfsdata. Prepare the PV. Edit pv.yml to make sure each WebLogic Server instance has read and write access to the NFS shared folder: kind: PersistentVolume apiVersion: v1 metadata: name: pv1 labels: app: wls-domain spec: storageClassName: manual capacity: storage: 10Gi accessModes: - ReadWriteMany persistentVolumeReclaimPolicy: Recycle # Retain, Recycle, Delete nfs: # Please use the correct NFS server host name or IP address server: 10.232.128.232 path: "/scratch/nfsdata" Prepare the PVC. Edit pvc.yml: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: wlserver-pvc-1 labels: app: wls-server spec: storageClassName: manual accessModes: - ReadWriteMany resources: requests: storage: 10Gi Kubernetes will find the matching PV for the PVC, and bind them together [Kubernetes Persistent Volumes and Claims]. Create the PV and PVC: $ kubectl create -f pv.yml $ kubectl create -f pvc.yml Then check the PVC status to make sure it binds to the PV: $ kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE wlserver-pvc-1 Bound pv1 10Gi RWX manual 7s Prepare the yml file for the Administration Server. It has a reference to the PVC wlserver-pvc-1. 
Edit wls-admin.yml to mount the NFS shared folder to /u01/wlsdomain in the WebLogic Server Administration Server pod: apiVersion: apps/v1beta1 kind: Deployment metadata: name: admin-server spec: replicas: 1 template: metadata: labels: app: admin-server spec: containers: - name: admin-server image: wls-k8s-domain imagePullPolicy: Never command: ["sh"] args: ["/u01/oracle/startadmin.sh"] readinessProbe: httpGet: path: /weblogic/ready port: 8001 initialDelaySeconds: 15 timeoutSeconds: 5 ports: - containerPort: 8001 env: - name: WLUSER valueFrom: secretKeyRef: name: wlsecret key: username - name: WLPASSWORD valueFrom: secretKeyRef: name: wlsecret key: password volumeMounts: # name must match the volume name below - name: domain-home mountPath: "/u01/wlsdomain" volumes: - name: domain-home persistentVolumeClaim: claimName: wlserver-pvc-1 Prepare the yml file for the Managed Servers. It has a reference to the PVC wlserver-pvc-1. Edit wls-stateful.yml to mount the NFS shared folder to /u01/wlsdomain in each Managed Server pod: apiVersion: apps/v1beta1 # for versions before 1.6.0 use extensions/v1beta1 kind: StatefulSet metadata: name: managed-server spec: serviceName: wls-subdomain replicas: 2 template: metadata: name: ms labels: app: managed-server spec: subdomain: wls-subdomain containers: - name: managed-server image: wls-k8s-domain imagePullPolicy: Never command: ["sh"] args: ["/u01/oracle/startms.sh"] readinessProbe: httpGet: path: /weblogic/ready port: 8011 initialDelaySeconds: 15 timeoutSeconds: 5 ports: - containerPort: 8011 env: - name: JAVA_OPTIONS value: "-Dweblogic.StdoutDebugEnabled=true" - name: USER_MEM_ARGS value: "-Xms64m -Xmx256m " - name: MY_POD_IP valueFrom: fieldRef: fieldPath: status.podIP - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: DNS_DOMAIN_NAME value: "wls-subdomain" - name: WLUSER valueFrom: secretKeyRef: name: wlsecret key: username - name: WLPASSWORD valueFrom: secretKeyRef: name: wlsecret key: password volumeMounts: - name: domain-home mountPath: "/u01/wlsdomain" volumes: - name: domain-home persistentVolumeClaim: claimName: wlserver-pvc-1 Create the Administration Server and Managed Server pods with the NFS shared volume. Each WebLogic Server instance will start from the mounted domain location: $ kubectl create -f wls-admin.yml $ kubectl create -f wls-stateful.yml Summary This blog describes the best practices of setting Kubernetes data volumes when running a WebLogic domain in a Kubernetes environment. Because Kubernetes pods are ephemeral, it is a best practice to persist the WebLogic domain to volumes, as well as files such as logs, stores, and so on. Kubernetes provides persistent volumes and persistent volume claims to simplify externalizing state and persisting important data to volumes. We provide two use cases: the first describes how to map the volume to a host machine where the Kubernetes nodes are running; and the second describes how to use an NFS shared volume. In both use cases, all WebLogic Server instances must have access to the files that are mapped to these volumes.
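To close the loop on both use cases, the following is a minimal, hedged sketch of how the shared storage can be verified once the pods are up. The NFS host, the pod names, and the mount path are assumptions carried over from the samples above.

# Hedged sketch; the host, pod names, and paths are assumptions from the samples.
# Confirm that the NFS export used in Use Case 2 is visible:
showmount -e 10.232.128.232

# Confirm that the Managed Server pods see the same domain home directory:
kubectl exec managed-server-0 -- ls /u01/wlsdomain
kubectl exec managed-server-1 -- ls /u01/wlsdomain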


Exporting Metrics from WebLogic Server

As it runs, WebLogic Server generates a rich set of metrics and runtime state information. Several thousand individual metrics are available to capture performance data, such as invocation counts, session activity, work manager threads, and so forth. These metrics are very useful for tracking activity, diagnosing problems, and ensuring sufficient resources are available. Exposed through both JMX and web services, these metrics are supported by Oracle administration tools, such as Enterprise Manager and the WebLogic Server Administration Console, as well as third-party clients.  One of those third-party clients is Prometheus. Prometheus is an open source monitoring toolkit that is commonly used in cloud environments as a framework for gathering, storing, and querying time series data. A number of exporters have been written to scrape information from various services and feed that information into a Prometheus server. Once there, this data can be retrieved using Prometheus itself or other tools that can process Prometheus data, such as Grafana. Oracle customers have been using the generic Prometheus JMX Exporter to scrape information from WebLogic Server instances, but this solution is hampered by usability issues and scalability at larger sites. Consider the following portion of an MBean tree: In this tree, ServerRuntime represents the top of the MBean tree and has several ApplicationRuntime MBeans, each of which has multiple ComponentRuntime MBeans. Some of those are of type WebAppComponentRuntime, which has multiple Servlet MBeans. We can configure the JMX Exporter as follows: jmxUrl: service:jmx:t3://@HOST@:@PORT@/jndi/weblogic.management.mbeanservers.runtime  username: system  password: gumby1234  lowercaseOutputName: false  lowercaseOutputLabelNames: false  whitelistObjectNames:    - "com.bea:ServerRuntime=*,Type=ApplicationRuntime,*"    - "com.bea:Type=WebAppComponentRuntime,*"    - "com.bea:Type=ServletRuntime,*"    rules:    - pattern: "^com.bea<ServerRuntime=.+, Name=(.+), ApplicationRuntime=(.+), Type=ServletRuntime, WebAppComponentRuntime=(.+)><>(.+): (.+)"      attrNameSnakeCase: true      name: weblogic_servlet_$4      value: $5      labels:        name: $3        app: $2        servletName: $1      - pattern: "^com.bea<ServerRuntime=(.+), Name=(.+), ApplicationRuntime=(.+), Type=WebAppComponentRuntime><>(.+): (.+)$"      attrNameSnakeCase: true      name: webapp_config_$4      value: $5      labels:        app: $3        name: $2 This selects the appropriate MBeans and allows the exporter to generate metrics such as: webapp_config_open_sessions_current_count{app="receivables",name="accounting"} 3  webapp_config_open_sessions_current_count{app="receivables",name="inventory"} 7  weblogic_servlet_invocations_total_count{app="receivables",name="accounting",servletName="Balance"} 0  weblogic_servlet_invocations_total_count{app="receivables",name="accounting",servletName="Login"} 0  weblogic_servlet_invocations_total_count{app="receivables",name="inventory",servletName="Count"} 0  weblogic_servlet_invocations_total_count{app="receivables",name="inventory",servletName="Reorder"} 0 However, this approach has challenges. The JMX Exporter can be difficult to set up because it must run as a Java agent. In addition, because JMX is built on top of RMI, and JMX over RMI/IIOP has been removed from the JRE as of Java SE 9, the exporter must be packaged with a platform-specific RMI implementation. The JMX Exporter is also somewhat processor-intensive. 
It requires a separate invocation of JMX to obtain each bean in the tree, which adds to the processing that must be done by the server. And configuring the exporter can be difficult because it relies on MBean names and regular expressions. While it is theoretically possible to select a subset of the attributes for a given MBean, in practice that adds further complexity to the regular expressions, thereby making it impractical. As a result, it is common to scrape everything and incur the transport and storage costs, and then to apply filtering only when the data is eventually viewed. The WebLogic Monitoring Exporter Along with JMX, Oracle WebLogic Server 12.2.1 and later provides a RESTful Management Interface for accessing runtime state and metrics. Included in this interface is a powerful bulk access capability that allows a client to POST a query that describes exactly what information is desired and to retrieve a single response that includes only that information. Oracle has now created the WebLogic Monitoring Exporter, which takes advantage of this interface. This exporter is implemented as a web application that is deployed to the WebLogic Server instance being monitored. Its configuration explicitly follows the MBean tree, starting below the ServerRuntime MBean. To obtain the same result as in the previous example, we could use the following: metricsNameSnakeCase: true   queries:     - applicationRuntimes:       key: name       keyName: app       componentRuntimes:         type: WebAppComponentRuntime         prefix: webapp_config_         key: name         values: [openSessionsCurrentCount, openSessionsHighCount]         servlets:           prefix: weblogic_servlet_           key: servletName This exporter can scrape the desired metrics with a single HTTP query rather than multiple JMX queries, requires no special setup, and provides an easy way to select the metrics that should be produced for an MBean, while defaulting to using all available fields. Note that the exporter does not need to specify a URL because it always connects to the server on which it is deployed, and does not specify username and password, but rather requires its clients to specify them when attempting to read the metrics. Managing the Application Because the exporter is a web application, it includes a landing page: Not only does the landing page include the link to the metrics, but it also displays the current configuration. When the app is first loaded, the configuration that’s used is the one embedded in the WAR file. However, the landing page contains a form that allows you to change the configuration by selecting a new yaml file. Only the queries from the new file are used, and we can combine queries by selecting the Append button before submitting. For example, we could add some JVM metrics: The new metrics will be reported the next time a client accesses the metrics URL. The new elements above will produce metrics such as: jvm_heap_free_current{name="myserver"} 285027752 jvm_heap_free_percent{name="myserver"} 71 jvm_heap_size_current{name="myserver"} 422051840 Metrics in a WebLogic Server Cluster In a WebLogic Server cluster, of course, it is of little value to change the metrics collected by a single server instance; because all cluster members are serving the same applications, we want them to report the same metrics. To do this, we need a way to have all the servers respond to the changes made to any one of them. 
The exporter does this by using a separate config_coordinator process to track changes. To use it, we need to add a new top-level element to the initial configuration that describes the query synchronization:

query_sync:
  url: http://coordinator:8099
  refreshInterval: 10

This specifies the URL of the config_coordinator process, which runs in its own Docker container. When the exporter first starts, and its configuration contains this element, it will contact the coordinator to see if it already has a new configuration. Thereafter, it will do so every time either the landing page or the metrics page is queried. The optional refreshInterval element limits how often the exporter looks for a configuration update. When it finds one, it will load it immediately without requiring a server restart. When you update the configuration in an exporter that is configured to use the coordinator, the new queries are sent to the coordinator, where other exporters can load them. In this fashion, an entire cluster of Managed Servers can have its metrics configurations kept in sync.

Summary

The WebLogic Monitoring Exporter greatly simplifies the process of exporting metrics from clusters of WebLogic Server instances in a Docker/Kubernetes environment. It does away with the need to figure out MBean names and work with regular expressions. It also allows metric labels to be defined explicitly from field names, and then automatically uses those definitions for metrics from subordinate MBeans, ensuring consistency. In our testing, we have found enormous improvements in performance using it versus the JMX Exporter. It uses less CPU and responds more quickly. [Figure: performance comparison graphs; the green lines represent the JMX Exporter, and the yellow lines represent the WebLogic Monitoring Exporter.] We expect users who wish to monitor WebLogic Server performance will gain great benefits from our efforts. See Using Prometheus and Grafana to Monitor WebLogic Server on Kubernetes for more information.
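To see what the exporter produces in a given environment, the metrics endpoint can simply be fetched over HTTP. This is a minimal sketch; the host, port, credentials, and the wls-exporter context path are assumptions and depend on how the WAR was deployed in your environment.

# Hedged sketch; host, port, credentials, and context path are assumptions.
curl -s --user weblogic:welcome1 http://myhost:7001/wls-exporter/metrics | grep "^weblogic_servlet"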



Announcing the New WebLogic Monitoring Exporter 

Very soon we will be announcing certification of WebLogic Server on Kubernetes. To give our users the best possible experience when running WebLogic domains in Docker/Kubernetes environments, we have developed the WebLogic Monitoring Exporter. This new tool exposes WebLogic Server metrics that can be read and collected by monitoring tools such as Prometheus, and displayed in Grafana. We are also making the WebLogic Monitoring Exporter available as open source here, which allows our community to contribute to this project and be part of enhancing it.

As it runs, WebLogic Server generates a rich set of metrics and runtime state information that provides detailed performance and diagnostic data about the servers, clusters, applications, and other resources that are running in a WebLogic domain. The WebLogic Monitoring Exporter enables administrators of Kubernetes environments to easily monitor this data using tools like Prometheus and Grafana, which are commonly used for monitoring Kubernetes environments.

For more information on the design and implementation of the WebLogic Monitoring Exporter, see Exporting Metrics from WebLogic Server. For more information on using Prometheus and Grafana to monitor WebLogic Server on Kubernetes, see WebLogic on Kubernetes monitoring using Prometheus and Grafana.

Stay tuned for more information about WebLogic Server certification on Kubernetes. Our intent is to enable you to run WebLogic Server in Kubernetes, to run WebLogic Server in the Kubernetes-based Oracle Container Engine, and to enable integration of WebLogic Server applications with applications developed on our Kubernetes-based Container Native Application Development Platform.


Run a WebLogic JMS Sample on Kubernetes

Overview

This blog is a step-by-step guide to configuring and running a sample WebLogic JMS application in a Kubernetes cluster. First we explain how to create a WebLogic domain that has an Administration Server and a WebLogic cluster. Next we add WebLogic JMS resources and a data source, deploy an application, and finally run the application.

This application is based on a sample application named 'Classic API - Using Distributed Destination' that is included in the WebLogic Server sample applications. It implements a scenario in which employees submit their names when they arrive, and a supervisor monitors employee arrival time. Employees choose whether to send their check-in messages to a distributed queue or a distributed topic. These destinations are configured on a cluster with two active Managed Servers. Two message-driven beans (MDBs), corresponding to these two destinations, are deployed to handle the check-in messages and store them in a database. A supervisor can then scan all of the check-in messages by querying the database.

The two main approaches for automating WebLogic configuration changes are WLST and the REST API. To run WLST or REST API scripts against a WebLogic domain on Kubernetes, you have two options:

Running the scripts inside Kubernetes cluster pods — With this option, address the servers using 'localhost', the NodePort service name, the StatefulSet's headless service name, the pod IP, or the cluster IP, together with the internal ports. The instructions in this blog use 'localhost'.

Running the scripts outside the Kubernetes cluster — With this option, use the host name or IP address and the NodePort.

In this blog we use the REST API and run the scripts within the Administration Server pod to deploy all the resources. All the resources are targeted to the whole cluster, which is the recommended approach for WebLogic Server on Kubernetes because it continues to work when the cluster scales up or down.

Creating the WebLogic Base Domain

We use the sample WebLogic domain in GitHub to create the base domain. In this WebLogic sample you will find a Dockerfile, scripts, and yaml files to build and run the WebLogic Server instances and cluster in the WebLogic domain on Kubernetes. The sample domain contains an Administration Server named AdminServer and a WebLogic cluster with four Managed Servers named managed-server-0 through managed-server-3. We configure four Managed Servers but start only the first two: managed-server-0 and managed-server-1.

One feature that distinguishes a JMS service from others is that it is highly stateful: most of its data, such as persistent messages and durable subscriptions, needs to be kept in a persistent store. A persistent store can be a database store or a file store, and in this sample we demonstrate how to use external volumes to keep this data in file stores. In this WebLogic domain we configure three persistent volumes for the following:

The domain home folder – This volume is shared by all the WebLogic Server instances in the domain; that is, the Administration Server and all Managed Server instances in the WebLogic cluster.

The file stores – This volume is shared by the Managed Server instances in the WebLogic cluster.

A MySQL database – The use of this volume is explained later in this blog.

Note that by default a domain home folder contains configuration files, log files, diagnostic files, application binaries, and the default file store files for each WebLogic Server instance in the domain.
Custom file store files are also placed in the domain home folder by default, but in this sample we customize the configuration to place these files in a separate, dedicated persistent volume. The two persistent volumes – one for the domain home, and one for the custom file stores – are shared by multiple WebLogic Server instances. Consequently, if the Kubernetes cluster is running on more than one machine, these volumes must reside on shared storage.

Complete the steps in the README.md file to create and run the base domain. Wait until all WebLogic Server instances are running; that is, the Administration Server and two Managed Servers. This may take a short while because the Managed Servers are started in sequence after the Administration Server is running and the provisioning of the initial domain is complete.

$ kubectl get pod
NAME                            READY   STATUS    RESTARTS   AGE
admin-server-1238998015-kmbt9   1/1     Running   0          5m
managed-server-0                1/1     Running   0          3m
managed-server-1                1/1     Running   0          3m

Note that in the commands used in this blog you need to replace $adminPod and $mysqlPod with the actual pod names.

Deploying the JMS Resources with a File Store

When the domain is up and running, we can deploy the JMS resources. First, prepare a JSON data file that contains definitions for one file store, one JMS server, and one JMS module. The file will be processed by a Python script to create the resources, one by one, using the WebLogic Server REST API.

file jms1.json:

{"resources": {
  "filestore1": {
    "url": "fileStores",
    "data": {
      "name": "filestore1",
      "directory": "/u01/filestores/filestore1",
      "targets": [{ "identity": ["clusters", "myCluster"] }]
    }
  },
  "jms1": {
    "url": "JMSServers",
    "data": {
      "messagesThresholdHigh": -1,
      "targets": [{ "identity": ["clusters", "myCluster"] }],
      "persistentStore": [ "fileStores", "filestore1" ],
      "name": "jmsserver1"
    }
  },
  "module": {
    "url": "JMSSystemResources",
    "data": {
      "name": "module1",
      "targets": [{ "identity": [ "clusters", "myCluster" ] }]
    }
  },
  "sub1": {
    "url": "JMSSystemResources/module1/subDeployments",
    "data": {
      "name": "sub1",
      "targets": [{ "identity": [ "JMSServers", "jmsserver1" ] }]
    }
  }
}}

Second, prepare the JMS module file, which contains a connection factory, a distributed queue, and a distributed topic.
file module1-jms.xml:

<?xml version='1.0' encoding='UTF-8'?>
<weblogic-jms xmlns="http://xmlns.oracle.com/weblogic/weblogic-jms"
              xmlns:sec="http://xmlns.oracle.com/weblogic/security"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xmlns:wls="http://xmlns.oracle.com/weblogic/security/wls"
              xsi:schemaLocation="http://xmlns.oracle.com/weblogic/weblogic-jms http://xmlns.oracle.com/weblogic/weblogic-jms/1.1/weblogic-jms.xsd">
  <connection-factory name="cf1">
    <default-targeting-enabled>true</default-targeting-enabled>
    <jndi-name>cf1</jndi-name>
    <transaction-params>
      <xa-connection-factory-enabled>true</xa-connection-factory-enabled>
    </transaction-params>
    <load-balancing-params>
      <load-balancing-enabled>true</load-balancing-enabled>
      <server-affinity-enabled>false</server-affinity-enabled>
    </load-balancing-params>
  </connection-factory>
  <uniform-distributed-queue name="dq1">
    <sub-deployment-name>sub1</sub-deployment-name>
    <jndi-name>dq1</jndi-name>
  </uniform-distributed-queue>
  <uniform-distributed-topic name="dt1">
    <sub-deployment-name>sub1</sub-deployment-name>
    <jndi-name>dt1</jndi-name>
    <forwarding-policy>Partitioned</forwarding-policy>
  </uniform-distributed-topic>
</weblogic-jms>

Third, copy these two files to the Administration Server pod, then run the Python script to create the JMS resources within the Administration Server pod:

$ kubectl exec $adminPod -- mkdir /u01/wlsdomain/config/jms/
$ kubectl cp ./module1-jms.xml $adminPod:/u01/wlsdomain/config/jms/
$ kubectl cp ./jms1.json $adminPod:/u01/oracle/
$ kubectl exec $adminPod -- python /u01/oracle/run.py createRes /u01/oracle/jms1.json

Launch the WebLogic Server Administration Console by going to your browser and entering the URL http://<hostIP>:30007/console in the address bar, and make sure that all the JMS resources are running successfully.

Deploying the Data Source

Setting Up and Running MySQL Server in Kubernetes

This sample stores the check-in messages in a database, so let's set up MySQL Server and get it running in Kubernetes. First, let's prepare the mysql.yml file, which defines a secret to store the username and password credentials, a persistent volume claim (PVC) to store database data in an external directory, and a MySQL Server deployment and service. In the base domain, one persistent volume is reserved and available so that it can be used by the PVC that is defined in mysql.yml.
file mysql.yml:

apiVersion: v1
kind: Secret
metadata:
  name: dbsecret
type: Opaque
data:
  username: bXlzcWw=
  password: bXlzcWw=
  rootpwd: MTIzNHF3ZXI=
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: mysql-server
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mysql-server
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql-server
    spec:
      containers:
      - name: mysql-server
        image: mysql:5.7
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: dbsecret
              key: rootpwd
        - name: MYSQL_USER
          valueFrom:
            secretKeyRef:
              name: dbsecret
              key: username
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: dbsecret
              key: password
        - name: MYSQL_DATABASE
          value: "wlsdb"
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: db-volume
      volumes:
      - name: db-volume
        persistentVolumeClaim:
          claimName: mysql-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-server
  labels:
    app: mysql-server
spec:
  ports:
  - name: client
    port: 3306
    protocol: TCP
    targetPort: 3306
  clusterIP: None
  selector:
    app: mysql-server

Next, deploy MySQL Server to the Kubernetes cluster:

$ kubectl create -f mysql.yml

Creating the Sample Application Table

First, prepare the DDL file for the sample application table:

file sampleTable.ddl:

create table jms_signin (
  name varchar(255) not null,
  time varchar(255) not null,
  webServer varchar(255) not null,
  mdbServer varchar(255) not null);

Next, create the table in MySQL Server:

$ kubectl exec -it $mysqlPod -- mysql -h localhost -u mysql -pmysql wlsdb < sampleTable.ddl

Creating a Data Source for the WebLogic Server Domain

We need to configure a data source so that the sample application can communicate with MySQL Server. First, prepare the ds1-jdbc.xml module file.
file ds1-jdbc.xml:

<?xml version='1.0' encoding='UTF-8'?>
<jdbc-data-source xmlns="http://xmlns.oracle.com/weblogic/jdbc-data-source"
                  xmlns:sec="http://xmlns.oracle.com/weblogic/security"
                  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                  xmlns:wls="http://xmlns.oracle.com/weblogic/security/wls"
                  xsi:schemaLocation="http://xmlns.oracle.com/weblogic/jdbc-data-source http://xmlns.oracle.com/weblogic/jdbc-data-source/1.0/jdbc-data-source.xsd">
  <name>ds1</name>
  <datasource-type>GENERIC</datasource-type>
  <jdbc-driver-params>
    <url>jdbc:mysql://mysql-server:3306/wlsdb</url>
    <driver-name>com.mysql.jdbc.Driver</driver-name>
    <properties>
      <property>
        <name>user</name>
        <value>mysql</value>
      </property>
    </properties>
    <password-encrypted>mysql</password-encrypted>
    <use-xa-data-source-interface>true</use-xa-data-source-interface>
  </jdbc-driver-params>
  <jdbc-connection-pool-params>
    <capacity-increment>10</capacity-increment>
    <test-table-name>ACTIVE</test-table-name>
  </jdbc-connection-pool-params>
  <jdbc-data-source-params>
    <jndi-name>jndi/ds1</jndi-name>
    <algorithm-type>Load-Balancing</algorithm-type>
    <global-transactions-protocol>EmulateTwoPhaseCommit</global-transactions-protocol>
  </jdbc-data-source-params>
  <jdbc-xa-params>
    <xa-transaction-timeout>50</xa-transaction-timeout>
  </jdbc-xa-params>
</jdbc-data-source>

Then deploy the data source module to the WebLogic Server domain:

$ kubectl cp ./ds1-jdbc.xml $adminPod:/u01/wlsdomain/config/jdbc/
$ kubectl exec $adminPod -- curl -v \
  --user weblogic:weblogic1 \
  -H X-Requested-By:MyClient \
  -H Accept:application/json \
  -H Content-Type:application/json \
  -d '{
    "name": "ds1",
    "descriptorFileName": "jdbc/ds1-jdbc.xml",
    "targets": [{ "identity": ["clusters", "myCluster"] }]
  }' -X POST http://localhost:8001/management/weblogic/latest/edit/JDBCSystemResources

Deploying the Servlet and MDB Applications

First, download the two application archives: signin.war and signinmdb.jar. Enter the commands below to deploy these two applications using the REST API within the pod running the WebLogic Administration Server.

# copy the two app files to the admin pod
$ kubectl cp signin.war $adminPod:/u01/wlsdomain/signin.war
$ kubectl cp signinmdb.jar $adminPod:/u01/wlsdomain/signinmdb.jar

# deploy the two apps via the REST API
$ kubectl exec $adminPod -- curl -v \
  --user weblogic:weblogic1 \
  -H X-Requested-By:MyClient \
  -H Content-Type:application/json \
  -d "{ name: 'webapp', sourcePath: '/u01/wlsdomain/signin.war', targets: [ { identity: [ 'clusters', 'myCluster' ] } ] }" \
  -X POST http://localhost:8001/management/weblogic/latest/edit/appDeployments

$ kubectl exec $adminPod -- curl -v \
  --user weblogic:weblogic1 \
  -H X-Requested-By:MyClient \
  -H Content-Type:application/json \
  -d "{ name: 'mdb', sourcePath: '/u01/wlsdomain/signinmdb.jar', targets: [ { identity: [ 'clusters', 'myCluster' ] } ] }" \
  -X POST http://localhost:8001/management/weblogic/latest/edit/appDeployments

Next, go to the WebLogic Server Administration Console (http://<hostIP>:30007/console) to verify that the applications have been deployed successfully and are running.

Running the Sample

Invoke the application on the Managed Server by going to a browser and entering the URL http://<hostIP>:30009/signIn/. Using a number of different browsers and machines to simulate multiple web clients, submit several unique employee names. Then check the result by entering the URL http://<hostIP>:30009/signIn/response.jsp.
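If you would rather drive the sample from a terminal than from several browsers, a small shell loop can play the role of multiple web clients. This is only an illustrative sketch: the form field names (name, destination) and the POST path are hypothetical placeholders, so check the pages in signin.war for the actual parameter names before relying on it.

# Hypothetical client-simulation sketch; the form fields and POST path are assumptions.
HOST_IP=your.host.ip   # replace with the host IP used above
for employee in alice bob carol dave; do
  # Each iteration pretends to be a different employee checking in.
  curl -s -o /dev/null \
       --data-urlencode "name=${employee}" \
       --data-urlencode "destination=queue" \
       "http://${HOST_IP}:30009/signIn/"
done
# View the aggregated check-in results, as on the response page above.
curl -s "http://${HOST_IP}:30009/signIn/response.jsp"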
You can see that there are two different levels of load balancing taking place:

HTTP requests are load balanced among the Managed Servers within the cluster. Notice the entries beneath the column labeled Web Server Name. For each employee check-in, this column identifies the name of the WebLogic Server instance that contains the servlet instance processing the corresponding HTTP request.

JMS messages that are sent to a distributed destination are load balanced among the MDB instances within the cluster. Notice the entries beneath the column labeled MDB Server Name. This column identifies the name of the WebLogic Server instance that contains the MDB instance processing the message.

Restarting All Pods

Restart the MySQL pod, the WebLogic Administration Server pod, and the WebLogic Managed Server pods. This demonstrates that the data in your external volumes is preserved independently of your pod life cycles. First, gracefully shut down the MySQL Server pod:

$ kubectl exec -it $mysqlPod -- /etc/init.d/mysql stop

After the MySQL Server pod is stopped, the Kubernetes control plane will restart it automatically. Next, follow the section "Restart Pods" in the README.md in order to restart all WebLogic Server pods.

$ kubectl get pod
NAME                            READY   STATUS    RESTARTS   AGE
admin-server-1238998015-kmbt9   1/1     Running   1          7d
managed-server-0                1/1     Running   1          7d
managed-server-1                1/1     Running   1          7d
mysql-server-3736789149-n2s2l   1/1     Running   1          3h

You will see that the restart count for each pod has increased from 0 to 1. After all pods are running again, access the WebLogic Server Administration Console to verify that the servers are in the RUNNING state. After the servers restart, all messages are recovered. You will get the same results as you did prior to the restart because all data is persisted in the external data volumes and therefore can be recovered after the pods are restarted.

Cleanup

Enter the following command to clean up the resources used by the MySQL Server instance:

$ kubectl delete -f mysql.yml

Next, follow the steps in the "Cleanup" section of the README.md to remove the base domain and delete all other resources used by this example.

Summary and Futures

This blog demonstrated using Kubernetes as a flexible and scalable environment for hosting WebLogic Server JMS cluster deployments. We leveraged basic Kubernetes facilities to manage WebLogic Server life cycles, used file-based message persistence, and demonstrated intra-cluster JMS communication between Java EE applications. We also demonstrated that file-based JMS persistence works well when the files are externalized to a shared data volume outside the Kubernetes pods, because this persists data beyond the life cycle of the pods.

In future blogs we'll explore hosting a WebLogic JMS cluster in Oracle's upcoming, fully certified, operator-based WebLogic Kubernetes environment. We'll also explore using external JMS clients to communicate with WebLogic JMS services running inside a Kubernetes cluster, using database persistence instead of file persistence, and using WebLogic JMS automatic service migration to automatically migrate JMS instances from shutdown pods to running pods.


Using Prometheus and Grafana to Monitor WebLogic Server on Kubernetes

As part of the release of our general availability (GA) version of the WebLogic Server Kubernetes Operator, the WebLogic team has created instructions that show how to create and run a WebLogic Server domain on Kubernetes. The README.md in the project provides all the steps. In the area of monitoring and diagnostics, for open source, we have developed a new tool, the WebLogic Monitoring Exporter, which scrapes runtime metrics for specific WebLogic Server instances and feeds them to the Prometheus and Grafana tools.

The WebLogic Monitoring Exporter is a web application that you deploy on the WebLogic Server instances you want to monitor. The exporter uses the WebLogic Server 12.2.1.x RESTful Management Interface for accessing runtime state and metrics. For a detailed description of WebLogic Monitoring Exporter configuration and usage, see The WebLogic Monitoring Exporter. In this blog you will learn how to configure Prometheus and Grafana to monitor WebLogic Server instances that are running in Kubernetes clusters.

Monitoring Using Prometheus

We'll be using the WebLogic Monitoring Exporter to scrape WebLogic Server metrics and feed them to Prometheus. Previous blog entries have described how to start and run WebLogic Server instances in Kubernetes with the WebLogic Monitoring Exporter deployed on the Managed Servers running in the cluster. To make sure that the WebLogic Monitoring Exporter is deployed and running, click the link:

http://[hostname]:30011/wls-exporter/metrics

You will be prompted for the WebLogic user credentials that are required to access the metrics data, which are weblogic/welcome1. The metrics page should show the metrics configured for the WebLogic Monitoring Exporter.

To create a Prometheus instance in Kubernetes, create a Prometheus configuration file, prometheus-deployment.yml. A sample file is provided with our sample project and may be modified as required to match your environment. The example Prometheus configuration file specifies:

- weblogic/welcome1 as the user credentials
- 5 seconds as the interval between updates of WebLogic Server metrics
- 32000 as the external port used to access the Prometheus dashboard
- use of the namespace 'monitoring'

You can change these values as required to reflect your specific environment and configuration.

To create the namespace 'monitoring':

$ kubectl create -f monitoring-namespace.yaml

To create the corresponding RBAC policy that grants the required permissions to the pods, use the provided sample:

$ kubectl create -f crossnsrbac.yaml

The RBAC policy in the sample uses the following namespaces: 'weblogic-domain' for the WebLogic Server domain deployment, and 'weblogic-operator' for the WebLogic Server Kubernetes Operator.

Start Prometheus to monitor the Managed Server instances:

$ kubectl create -f prometheus-deployment.yaml

Verify that Prometheus is monitoring all Managed Servers by browsing to http://[hostname]:32000. Examine the Insert metric at cursor pull-down; it should list metric names based on the current configuration of the WebLogic Monitoring Exporter web application. To check that the WebLogic Monitoring Exporter is configured correctly, connect to the web page at http://[hostname]:30011/wls-exporter. The current configuration will be listed there.
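As a quick command-line check that Prometheus has actually discovered every exporter endpoint, you can also query its standard HTTP API for the list of scrape targets. This is only a sketch: it assumes the NodePort 32000 configured above, and the filtering at the end is purely illustrative.

# List the scrape targets Prometheus has discovered, with their URLs and health.
# Assumes the Prometheus dashboard is exposed on NodePort 32000, as configured above.
HOST=your.host.name
curl -s "http://${HOST}:32000/api/v1/targets" \
  | python -m json.tool \
  | grep -E '"scrapeUrl"|"health"'

Each Managed Server's wls-exporter endpoint should appear in the output with a health value of "up".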
For example, below is the corresponding WebLogic Monitoring Exporter configuration yml file:

metricsNameSnakeCase: true
queries:
- applicationRuntimes:
    key: name
    keyName: app
    componentRuntimes:
      type: WebAppComponentRuntime
      prefix: webapp_config_
      key: name
      values: [deploymentState, contextRoot, sourceInfo, openSessionsHighCount, openSessionsCurrentCount, sessionsOpenedTotalCount, sessionCookieMaxAgeSecs, sessionInvalidationIntervalSecs, sessionTimeoutSecs, singleThreadedServletPoolSize, sessionIDLength, servletReloadCheckSecs, jSPPageCheckSecs]
      servlets:
        prefix: weblogic_servlet_
        key: servletName
        values: [invocationTotalCount, reloadTotal, executionTimeAverage, poolMaxCapacity, executionTimeTotal, reloadTotalCount, executionTimeHigh, executionTimeLow]
- JVMRuntime:
    key: name
    values: [heapFreeCurrent, heapFreePercent, heapSizeCurrent, heapSizeMax, uptime, processCpuLoad]

The configuration listed above was embedded into the WebLogic Monitoring Exporter WAR file. To change or add more metrics data, simply connect to the landing page at http://[hostname]:30011/wls-exporter and use the Append or Replace buttons to load a configuration file in yml format. For example, workmanager.yml:

metricsNameSnakeCase: true
queries:
- applicationRuntimes:
    key: name
    workManagerRuntimes:
      prefix: workmanager_
      key: applicationName
      values: [pendingRequests, completedRequests, stuckThreadCount]

By constructing Prometheus queries, you can generate any data output you require to monitor and diagnose the servers, applications, and resources that are running in your WebLogic domain. For example, you can enter the following into the query box, and Prometheus will return current data from all running Managed Servers in the WebLogic cluster:

weblogic_servlet_execution_time_average > 1

Prometheus also generates graphs based on the returned data. For example, if you click the Graph tab, Prometheus will generate a graph showing the servlets whose average execution time exceeds the one-second threshold.

Monitoring Using Grafana

For better visual presentation, and for dashboards with multiple graphs, use Grafana. Here is an example configuration file, grafana-deployment.yaml, which can be used to start Grafana in the Kubernetes environment. To start Grafana to monitor the Managed Servers, use the following kubectl command:

$ kubectl create -f grafana-deployment.yaml

Connect to Grafana at http://[hostname]:31000. Log in to the home page with the username admin and the password pass. The Grafana home page will be displayed.

To connect Grafana to Prometheus, select Add Data Source and then enter the following values:

Name:   Prometheus
Type:   Prometheus
Url:    http://prometheus:9090
Access: Proxy

Select the Dashboards tab and click Import. Now we are ready to generate a dashboard to monitor WebLogic Server. Complete the following steps: Click the Grafana symbol in the upper left corner of the home page, select Dashboards, and add a new dashboard. Select Graph and pull it into the empty space; this generates an empty graph panel. Click on the panel and select the edit option. It will open an editable panel where you can customize how the metrics graph will be presented.
In the Graph panel, select the General tab, and enter WebLogic Servlet Execution Average Time as the title. Select the Metrics tab, then select the Prometheus option in the Panel Data Source pull-down menu. If you click in the empty Metric lookup field, all metrics configured in the WebLogic Monitoring Exporter will be pulled in, just as in Prometheus. Let's enter the same query we used in the Prometheus example, weblogic_servlet_execution_time_average > 1. The generated graph will show data for all available servlets with an average execution time greater than 1 second, on all Managed Servers in the cluster. Each color represents a specific pod and servlet combination.

To show data for a particular pod, simply click on the corresponding legend. This removes all other pods' data from the graph, and their legends are no longer highlighted. To add more data, press the shift key and click on any desired legend. To reset, click the same legend again, and all others will be redisplayed in the graph.

To customize the legend, enter the desired values in the Legend Format field. For example:

{{pod_name}} : appName={{webapp}} : servletName={{servletName}}

Grafana will begin to display your customized legend. If you click the graph, you can see all values for the selected time. Select the Graph → Legend tab to obtain more options for customizing the legend view; for example, you can move the placement of the legend, show the minimum, maximum, or average values, and more. By selecting the Graph → Axes tab, you can switch the units to match the metrics data; in this example it is time (milliseconds).

Grafana also provides alerting tools, so we can configure an alert for specified conditions. In the example below, Grafana will fire an alert if the average servlet execution time is greater than 100 msec, and it will also send an email to the administrator.

Last, we want our graph to be refreshed every 5 seconds, the same refresh interval as the Prometheus scrape interval. We can also customize the time range for monitoring the data. To do that, click the upper-right corner of the created dashboard. By default, it is configured to show metrics for the prior 6 hours up to the current time. Make the desired changes; for example, switch to refresh every 5 seconds and click Apply. When you are done, simply click the 'save' icon in the upper left corner of the window, and enter a name for the dashboard.

Summary

WebLogic Server today has a rich set of metrics that can be monitored using well-known tools such as the WebLogic Server Administration Console and the Monitoring Dashboard. These tools are used to monitor the WebLogic Server instances, applications, and resources running in a WebLogic deployment in Kubernetes. In this container ecosystem, tools like Prometheus and Grafana offer an alternative way of exporting and monitoring the metrics from clusters of WebLogic Server instances running in Kubernetes. This approach also makes monitored data easy to collect, access, present, and customize in real time, without restarting the domain. In addition, it provides a simple way to create alerts and send notifications to any interested parties. Start using it, you will love it!
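As an aside, the Add Data Source step described above can also be scripted against Grafana's HTTP API rather than clicked through in the UI. The sketch below is only illustrative; it assumes the NodePort 31000 and the admin/pass login used in this example, and it reuses the same data source values.

# Create the Prometheus data source in Grafana through its HTTP API.
# Host name is a placeholder; port 31000 and the admin/pass login match the example above.
HOST=your.host.name
curl -s -X POST "http://admin:pass@${HOST}:31000/api/datasources" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "Prometheus",
        "type": "prometheus",
        "url": "http://prometheus:9090",
        "access": "proxy"
      }'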



How to... WebLogic Server on Kubernetes

The WebLogic Server team is working on certifying WebLogic domains being orchestrated in Kubernetes. As part of this work we are releasing a series of blogs that answer questions our users might have, and describe best practices for running WebLogic Server on Kubernetes. These blogs cover topics such as security best practices, monitoring, logging, messaging, transactions, scaling clusters, externalizing state in volumes, patching, updating applications, and much more. Our first blog walks you through a sample on GitHub that lets you jump right in and try it! We will continue to update this list of blogs to make it easy for you to follow them, so stay tuned.

Security Best Practices for WebLogic Server Running in Docker and Kubernetes
Automatic Scaling of WebLogic Clusters on Kubernetes
Let WebLogic work with Elastic Stack in Kubernetes
Docker Volumes in WebLogic
Exporting Metrics from WebLogic Server
Using Prometheus and Grafana to Monitor WebLogic Server on Kubernetes
Run a WebLogic JMS Sample on Kubernetes
Run Standalone WebLogic JMS Clients on Kubernetes
Best Practices for Application Deployment on WebLogic Server Running on Kubernetes
Patching WebLogic Server in a Kubernetes Environment
WebLogic Server on Kubernetes Data Volume Usage
T3 RMI Communication for WebLogic Server Running on Kubernetes
Processing the Oracle WebLogic Kubernetes Operator Logs using Elastic Stack
Using Prometheus to Automatically Scale WebLogic Clusters on Kubernetes
WebLogic Dynamic Clusters on Kubernetes
How to run WebLogic clusters on the Oracle Cloud Infrastructure Container Engine for Kubernetes
WebLogic Server JTA in a Kubernetes Environment
Voyager/HAProxy as Load Balancer to Weblogic Domains in Kubernetes
Running WebLogic on Open Shift
Easily Create an OCI Container Engine for Kubernetes cluster with Terraform Installer to run WebLogic Server
Automate WebLogic image building and patching!
WebLogic Operator 2.2 Support for ADF Applications
Portable WebLogic Domains Using Kubernetes Operator Configuration Overrides
End to end example of monitoring WebLogic Server with Grafana dashboards on the OCI Container Engine for Kubernetes
Automating WebLogic Deployment - CI/CD with WebLogic Tooling
WebLogic announces support for CRI-O container runtime



Migrating from Multi Data Source to Active GridLink - Take 2

In the original blog article on this topic at this link, I proposed that you delete the multi data source (MDS) and create a replacement Active GridLink (AGL) data source. In the real world, the multi data source is likely referenced by other objects, such as a JDBC store, and deleting the MDS would create an invalid configuration. Further, the objects using connections from the MDS would fail during and after this re-configuration. That implies that for this type of operation the related server needs to be shut down, the configuration updated with offline WLST, and the server restarted. The administration console cannot be used for this type of migration. Except for the section that describes using the console, the other information in the earlier blog article still applies to this process. No changes should be required in the application, only in the configuration, because we preserve the JNDI name.

The following is a sample of what the offline WLST script might look like. You could parameterize it and make it more flexible in handling multiple data sources.

# java weblogic.WLST file.py
import sys, socket, os

# Application values
dsName='myds'
memberds1='ds1'
memberds2='ds2'
domain='/domaindir'
onsNodeList='host1:6200,host2:6200'
url='jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=' \
 + '(ADDRESS=(PROTOCOL=TCP)(HOST=host1)(PORT=1521))' \
 + '(ADDRESS=(PROTOCOL=TCP)(HOST=host2)(PORT=1521)))' \
 + '(CONNECT_DATA=(SERVICE_NAME=servicename)))'
user='user1'
password='password1'

readDomain(domain)

# change type from MDS to AGL
# The following is for WLS 12.1.2 and 12.1.3 if not setting
# FanEnabled true, which is not recommended
# set('ActiveGridlink','true')
# The following is for WLS 12.2.1 and later
#cd('/JDBCSystemResources/' + dsName + '/JdbcResource/' + dsName )
#set('DatasourceType', 'AGL')

# set the AGL parameters
cd('/JDBCSystemResources/' + dsName + '/JdbcResource/' + dsName)
create('myJdbcOracleParams','JDBCOracleParams')
cd('JDBCOracleParams/NO_NAME_0')
set('FanEnabled','true')
set('OnsNodeList',onsNodeList)

# Set the data source parameters
cd('/JDBCSystemResources/' + dsName + '/JdbcResource/' + dsName + '/JDBCDataSourceParams/NO_NAME_0')
set('GlobalTransactionsProtocol','None')
unSet('DataSourceList')
unSet('AlgorithmType')

# Set the driver parameters
cd('/JDBCSystemResources/' + dsName + '/JdbcResource/' + dsName)
create('myJdbcDriverParams','JDBCDriverParams')
cd('JDBCDriverParams/NO_NAME_0')
set('Url',url)
set('DriverName','oracle.jdbc.OracleDriver')
set('PasswordEncrypted',password)
create('myProps','Properties')
cd('Properties/NO_NAME_0')
create('user', 'Property')
cd('Property')
cd('user')
set('Value', user)

# Set the connection pool parameters
cd('/JDBCSystemResources/' + dsName + '/JdbcResource/' + dsName)
create('myJdbcConnectionPoolParams','JDBCConnectionPoolParams')
cd('/JDBCSystemResources/' + dsName + '/JdbcResource/' + dsName + '/JDBCConnectionPoolParams/NO_NAME_0')
set('TestTableName','SQL ISVALID')

# remove member data sources if they are not needed
cd('/')
delete(memberds1, 'JDBCSystemResource')
delete(memberds2, 'JDBCSystemResource')

updateDomain()
closeDomain()
exit()

In WLS 12.1.2 and 12.1.3, there is an explicit ActiveGridlink flag that can be used to identify an AGL datasource, even if FanEnabled is not set to true (which is not recommended) and auto-ONS is used (12.1.2 is the first release in which auto-ONS is supported). In the script above, uncomment the line to set it only if FanEnabled and OnsNodeList are not set.
Starting in WLS 12.2.1, there is an explicit datasource type at the JDBCResource level. If that is set to GENERIC, you must re-set it using set("DatasourceType", "AGL"). In the script above, uncomment the lines to set it. In this case, the ActiveGridlink flag is not necessary.

If you are using an Oracle wallet for ONS, it needs to be added to the JDBCOracleParams object as well. Prior to WLS 12.2.1, the ONS information needs to be explicitly specified. In WLS 12.2.1 and later, the ONS information can be omitted and the database will try to determine the correct information. For more complex ONS topologies, the configuration can be specified using the format described in http://docs.oracle.com/middleware/1221/wls/JDBCA/gridlink_datasources.htm#BABGJBIC.

Note: the unSet() method was not added to offline WLST until WLS 12.2.1.2.0. There is a related patch to add this feature to WLS 12.1.3 at Patch 25695948. For earlier releases, one option is to edit the MDS descriptor file, delete the lines for "data-source-list" and "algorithm-type", and comment out the unSet() calls before running the offline WLST script. Another option is to run the following online WLST script, which does support the unSet() method. However, the server will need to be restarted after the update and before the member data sources can be deleted.

# java weblogic.WLST file.py
import sys, socket, os

# Application values
dsName='myds'
memberds1='ds1'
memberds2='ds2'
onsNodeList='host1:6200,host2:6200'
url='jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=' \
 + '(ADDRESS=(PROTOCOL=TCP)(HOST=host1)(PORT=1521))' \
 + '(ADDRESS=(PROTOCOL=TCP)(HOST=host2)(PORT=1521)))' \
 + '(CONNECT_DATA=(SERVICE_NAME=otrade)))'
user='user1'
password='password1'
hostname='localhost'
admin='weblogic'
adminpw='welcome1'

connect(admin,adminpw,"t3://"+hostname+":7001")
edit()
startEdit()

# change type from MDS to AGL
# The following is for WLS 12.1.2 and 12.1.3 if not setting
# FanEnabled to true.  It is recommended to always set FanEnabled to true.
# cd('/JDBCSystemResources/' + dsName)
# set('ActiveGridlink','true')
# The following is for WLS 12.2.1 and later
# cd('/JDBCSystemResources/' + dsName + '/JDBCResource/' + dsName )
# set('DatasourceType', 'AGL')

# set the AGL parameters
cd('/JDBCSystemResources/' + dsName + '/JDBCResource/' + dsName + '/JDBCOracleParams/' + dsName)
set('FanEnabled','true')
set('OnsNodeList',onsNodeList)

# Set the data source parameters
cd('/JDBCSystemResources/' + dsName + '/JDBCResource/' + dsName + '/JDBCDataSourceParams/' + dsName)
set('GlobalTransactionsProtocol','None')
cmo.unSet('DataSourceList')
cmo.unSet('AlgorithmType')

# Set the driver parameters
cd('/JDBCSystemResources/' + dsName + '/JDBCResource/' + dsName + '/JDBCDriverParams/' + dsName)
set('Url',url)
set('DriverName','oracle.jdbc.OracleDriver')
set('PasswordEncrypted',password)
cd('Properties/' + dsName)
userprop=cmo.createProperty('user')
userprop.setValue(user)

# Set the connection pool parameters
cd('/JDBCSystemResources/' + dsName + '/JDBCResource/' + dsName + '/JDBCConnectionPoolParams/' + dsName)
set('TestTableName','SQL PINGDATABASE')

# cannot remove member data sources until server restarted
#cd('/')
#delete(memberds1, 'JDBCSystemResource')
#delete(memberds2, 'JDBCSystemResource')

save()
activate()
exit()

A customer was having problems setting multiple targets in a WLST script. It's not limited to this topic, but here's how it is done.
from javax.management import ObjectName
from jarray import array

# Build an array of server targets and apply it to the data source.
targetlist = []
targetlist.append(ObjectName('com.bea:Name=myserver1,Type=Server'))
targetlist.append(ObjectName('com.bea:Name=myserver2,Type=Server'))
targets = array(targetlist, ObjectName)
cd('/JDBCSystemResources/' + dsName)
set('Targets',targets)
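To actually apply the offline script shown earlier in this post, run it with the WLST environment on the classpath while the affected server is shut down. A minimal sketch follows; the WebLogic home path is an assumption that you should adjust to your installation.

# Set up the WLST classpath and run the offline migration script.
# WL_HOME below is an assumption; point it at your WebLogic Server installation.
WL_HOME=/u01/oracle/wlserver
. ${WL_HOME}/server/bin/setWLSEnv.sh

# The affected server must be shut down before running the offline script.
java weblogic.WLST file.py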


Docker Volumes in WebLogic

Background Information

In the Docker world, containers are ephemeral; they can be destroyed and replaced. After a container is destroyed, it is gone, and all the changes made to the container are gone with it. If you want to persist data independently of the container's lifecycle, you need to use volumes. Volumes are directories that exist outside of the container file system.

Docker Data Volume Introduction

This blog provides a generic introduction to Docker data volumes and is based on a WebLogic Server 12.2.1.3 image. You can build the image using the scripts in GitHub. In this blog, this base image is used only to demonstrate the usage of data volumes; no WebLogic Server instance is actually running. Instead, it uses the 'sleep 3600' command to keep the container running for an hour (3600 seconds) and then stop.

Local Data Volumes

Anonymous Data Volumes

For an anonymous data volume, a unique name is auto-generated internally. Two ways to create anonymous data volumes are:

Create or run a container with '-v /container/fs/path' in docker create or docker run
Use the VOLUME instruction in a Dockerfile: VOLUME ["/container/fs/path"]

$ docker run --name c1 -v /mydata -d weblogic-12.2.1.3-developer 'sleep 3600'
$ docker inspect c1 | grep Mounts -A 10
"Mounts": [
    {
        "Name": "625672972c137c2e29b85f1e6ae70d6d4eb756002062792721a5ac0e9d0f1421",
        "Source": "/scratch/docker/volumes/625672972c137c2e29b85f1e6ae70d6d4eb756002062792721a5ac0e9d0f1421/_data",
        "Destination": "/mydata",
        "Driver": "local",
        "Mode": "",
        "RW": true,
        "Propagation": ""
    }
],

# now we know that the volume has a randomly generated name 625672972c137c2e29b85f1e6ae70d6d4eb756002062792721a5ac0e9d0f1421
$ docker volume inspect 625672972c137c2e29b85f1e6ae70d6d4eb756002062792721a5ac0e9d0f1421
[
    {
        "Name": "625672972c137c2e29b85f1e6ae70d6d4eb756002062792721a5ac0e9d0f1421",
        "Driver": "local",
        "Mountpoint": "/scratch/docker/volumes/625672972c137c2e29b85f1e6ae70d6d4eb756002062792721a5ac0e9d0f1421/_data",
        "Labels": null,
        "Scope": "local"
    }
]

Named Data Volumes

Named data volumes are available in Docker 1.9.0 and later. Two ways to create named data volumes are:

Use docker volume create --name volume_name
Create or run a container with '-v volume_name:/container/fs/path' in docker create or docker run

$ docker volume create --name testv1
$ docker volume inspect testv1
[
    {
        "Name": "testv1",
        "Driver": "local",
        "Mountpoint": "/scratch/docker/volumes/testv1/_data",
        "Labels": {},
        "Scope": "local"
    }
]

Mount Host Directory or File

You can mount an existing host directory into a container directly. To mount a host directory when running a container:

Create or run a container with '-v /host/path:/container/path' in docker create or docker run

You can mount an individual host file in the same way:

Create or run a container with '-v /host/file:/container/file' in docker create or docker run

Note that a mounted host directory or file is not an actual data volume managed by Docker, so it is not shown when running docker volume ls. Also, you cannot mount a host directory or file in a Dockerfile.

$ docker run --name c2 -v /home/data:/mydata -d weblogic-12.2.1.3-developer 'sleep 3600'
$ docker inspect c2 | grep Mounts -A 8
"Mounts": [
    {
        "Source": "/home/data",
        "Destination": "/mydata",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
    }
],

Data Volume Containers

Data volume containers are data-only containers. After a data volume container is created, it doesn't need to be started. Other containers can access the shared data using --volumes-from.
# step 1: create a data volume container 'vdata' with two anonymous volumes
$ docker create -v /vv/v1 -v /vv/v2 --name vdata weblogic-12.2.1.3-developer

# step 2: run two containers c3 and c4 with reference to the data volume container vdata
$ docker run --name c3 --volumes-from vdata -d weblogic-12.2.1.3-developer 'sleep 3600'
$ docker run --name c4 --volumes-from vdata -d weblogic-12.2.1.3-developer 'sleep 3600'

Data Volume Plugins

Docker 1.8 and later support volume plugins, which can extend Docker with new volume drivers. You can use volume plugins to mount remote folders in a shared storage server directly, such as iSCSI, NFS, or FC. The same storage can be accessed, in the same manner, from another container running in another host. Containers in different hosts can share the same data. There are plugins available for different storage types. Refer to the Docker documentation for volume plugins: https://docs.docker.com/engine/extend/legacy_plugins/#volume-plugins.

WebLogic Persistence in Volumes

When running WebLogic Server in Docker, there are basically two use cases for using data volumes:

To separate data from the WebLogic Server lifecycle, so you can reuse the data even after the WebLogic Server container is destroyed and later restarted or moved.
To share data among different WebLogic Server instances, so they can recover each other's data, if needed (service migration).

The following WebLogic Server artifacts are candidates for using data volumes:

Domain home folders
Server logs
Persistent file stores for JMS, JTA, and such
Application deployments

Refer to the following table for the data stored by WebLogic Server subsystems.

Subsystem or Service: Diagnostic Service
What It Stores: Log records, data events, and harvested metrics
More Information: Understanding WLDF Configuration in Configuring and Using the Diagnostics Framework for Oracle WebLogic Server

Subsystem or Service: JMS Messages
What It Stores: Persistent messages and durable subscribers
More Information: Understanding the Messaging Models in Developing JMS Applications for Oracle WebLogic Server

Subsystem or Service: JMS Paging Store
What It Stores: One per JMS server. Paged persistent and non-persistent messages.
More Information: Main Steps for Configuring Basic JMS System Resources in Administering JMS Resources for Oracle WebLogic Server

Subsystem or Service: JTA Transaction Log (TLOG)
What It Stores: Information about committed transactions, coordinated by the server, that may not have been completed. TLOGs can be stored in the def