
Proactive insights, news and tips from Oracle WebLogic Server Support. Learn Oracle from Oracle.

  • December 12, 2017

Best Practices for Application Deployment on WebLogic Server Running on Kubernetes

Overview

WebLogic Server and Kubernetes each provide a rich set of features to support application deployment. As part of the process of certifying WebLogic Server on Kubernetes, we have identified a set of best practices for deploying Java EE applications on WebLogic Server instances that run in Kubernetes and Docker environments. This blog describes those best practices. They include the general recommendations described in Deploying Applications to Oracle WebLogic Server, and also include the application deployment features provided in Kubernetes.

Application Deployment Terminology 

Both WebLogic Server and Kubernetes use similar terms for resources they manage, but with different meanings. For example, the notion of application or deployment has slightly different meaning, which can create confusion. The table below lists key terms that are used in this blog and how they are defined differently in WebLogic Server and Kubernetes. See Kubernetes Reference Documentation for a standardized glossary with a list of Kubernetes terminology.

Table 1 Application Deployment Terminology

Application
  • WebLogic Server: A Java EE application (an enterprise application or web application) or a standalone Java EE module (such as an EJB or resource adapter) that is organized according to the Java EE specification. An application unit can be a web application, enterprise application, Enterprise JavaBean, resource adapter, web service, Java EE library, or optional package. An application unit may also include JDBC, JMS, or WLDF modules, or a client application archive.
  • Kubernetes: Software that is containerized and managed in a cluster environment by Kubernetes. WebLogic Server itself is an example of a Kubernetes application.

Application Deployment
  • WebLogic Server: The process of making a Java Platform, Enterprise Edition (Java EE) application or module available for processing client requests on WebLogic Server.
  • Kubernetes: A way of packaging, instantiating, running, and communicating with containerized applications in a cluster environment. Kubernetes also has an API object called a Deployment that manages a replicated application.

Deployment Tool
  • WebLogic Server: the weblogic.Deployer utility, the Administration Console, the WebLogic Scripting Tool (WLST), the wldeploy Ant task, the weblogic-maven-plugin Maven plug-in, the WebLogic Deployment API, and the auto-deployment feature
  • Kubernetes: kubeadm, kubectl, minikube, Helm charts, and kops

Cluster
  • WebLogic Server: A WebLogic cluster consists of multiple WebLogic Server instances running simultaneously and working together to provide increased scalability and reliability. A cluster appears to clients as a single WebLogic Server instance. The server instances that constitute a cluster can run on the same machine or on different machines. You can increase a cluster's capacity by adding server instances to the cluster on an existing machine, or by adding machines to host additional server instances. Each server instance in a cluster must run the same version of WebLogic Server.
  • Kubernetes: A Kubernetes cluster consists of a master node and a set of worker nodes. In a production environment, these run in a distributed setup on multiple nodes. For testing purposes, all the components can run on the same node (either a physical host or a virtual machine).

Within the context of this blog, the following definitions are used:
  • "Application" refers to a Java EE application.
  • "Application deployment" refers to deploying a Java EE application on WebLogic Server.
  • "Kubernetes application" refers to software managed by Kubernetes, such as WebLogic Server itself.

Summary of Best Practices for Application Deployment in Kubernetes

In this blog, the best practices for application deployment on WebLogic Server running in Kubernetes cover several areas:

  • Distributing Java EE application deployment files to a Kubernetes environment so the WebLogic Server containers in pods can access the deployment files.
  • Deploying Java EE applications in a Kubernetes environment so the applications are available for the WebLogic Server containers in pods to process the client requests.
  • Integrating Kubernetes applications with the ReadyApp framework to check the Kubernetes applications' readiness status.

General Java EE Application Deployment Best Practices Overview

Before drilling down into the best practices details, let’s briefly review the general Java EE application deployment best practices, which are described in Deploying Applications to Oracle WebLogic Server.

The general Java EE application deployment process involves multiple parts, mainly:

  1. Preparing the Java EE application or module. See Preparing Applications and Modules for Deployment, including Best Practices for Preparing Deployment Files.
  2. Configuring the Java EE application or module for deployment. See Configuring Applications for Production Deployment, including Best Practices for Managing Application Configuration.
  3. Exporting the Java EE application or module for deployment to a new environment. See Exporting an Application for Deployment to New Environments, including Best Practices for Exporting a Deployment Configuration.
  4. Deploying the Java EE application or module. See Deploying Applications and Modules with weblogic.Deployer, including Best Practices for Deploying Applications.
  5. Redeploying the Java EE application or module. See Redeploying Applications in a Production Environment.

Distributing Java EE Application Deployment Files in Kubernetes

Assume that the WebLogic Server instances have been deployed into Kubernetes and Docker environments. Before you deploy Java EE applications to those WebLogic Server instances, the application deployment files (for example, EAR, WAR, and RAR files) need to be distributed to locations that the WebLogic Server instances in the pods can access. In Kubernetes, the deployment files can be distributed by means of Docker images, or manually by an administrator.

Pre-distribution of Java EE Applications in Docker Images

A Docker image can contain a pre-built WebLogic Server domain home directory that has one or more Java EE applications deployed to it. When the containers in the pods are created and started using the same Docker image, all containers should have the same Java EE applications deployed to them.

If the Java EE applications in the Docker image are updated to a newer version, a new Docker image can be created on top of the existing one, as shown in Figure 1.

However, as newer application versions are introduced, each version adds another layer to the image, which consumes more resources, such as disk space. Consequently, accumulating an excessive number of layers in the Docker image is not recommended.

Figure 1 Pre-distribution of Java EE Application in layered Docker Images
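The layering described above might look like the following hypothetical Dockerfile; the image name, archive name, and target path are invented for illustration:

```dockerfile
# Hypothetical sketch: layer application version 2 on top of the existing image.
# Each new application version adds another image layer, which is why an
# excessive number of layers is discouraged.
FROM myrepo/wls-app:v1

# Replace the application archive baked into the previous layer
COPY simpleApp-v2.war /u01/wlsdomain/applications/simpleApp.war
```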

Using Volumes in a Kubernetes Cluster

Application files can be shared among all the containers in all pods by mapping an application volume directory in each pod to an external directory on the host. When using volumes, the application files need to be copied only once, to the directory on the host; there is no need to copy them to each pod. This saves disk space and deployment time, especially for large applications. Using volumes is the recommended way to distribute Java EE applications to WebLogic Server instances running in Kubernetes.

Figure 2 Mounting Volumes to an External Directory

As shown in Figure 2, every container in each of the three pods has an application volume directory /shared/applications. Each of these directories is mapped to the same external directory on the host: /host/apps. After the administrator puts the application file simpleApp.war in the /host/apps directory on the host, this file can then be accessed by the containers in each pod from the /shared/applications directory.

Note that Kubernetes supports different volume types. For information about determining the volume type to use, creating the volume directory, determining the medium that backs it, and identifying the contents of the volume, see Volumes in the Kubernetes documentation.
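The mapping shown in Figure 2 might be expressed with a hostPath volume in the pod specification, as in the following sketch. The pod and container names are examples, and a hostPath volume assumes the directory exists on the node where the pod runs:

```yaml
# Sketch: map the host directory /host/apps into the WebLogic Server
# container at /shared/applications using a hostPath volume.
apiVersion: v1
kind: Pod
metadata:
  name: managed-server-0          # example pod name
spec:
  containers:
  - name: weblogic-server         # example container name
    image: wls-k8s-domain         # sample image from this blog
    volumeMounts:
    - name: apps-volume
      mountPath: /shared/applications
  volumes:
  - name: apps-volume
    hostPath:
      path: /host/apps
```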

Best Practices for Distributing Java EE Application Deployment Files in Kubernetes

  • Use volumes to persist and share the application files across the containers in all pods.
  • On-disk files in a container are ephemeral. When using a pre-built WebLogic Server domain home in a Docker image, use a volume to store the domain home directory on the host. A sample WebLogic domain wls-k8s-domain that includes a pre-built WebLogic Server domain home directory is available from GitHub at https://github.com/oracle/docker-images/tree/master/OracleWebLogic/samples/wls-k8s-domain
  • Store the application files in a volume whose location is separate from the domain home volume directory on the host.
  • A deployment plan generated for an existing Java EE web application that is deployed to WebLogic Server can be stored in a volume as well. For more details about using a deployment plan, see the tutorial at http://www.oracle.com/webfolder/technetwork/tutorials/obe/fmw/wls/12c/09-DeployPlan--4464/deployplan.htm.
  • By default, all processes in WebLogic Server pods run as user ID 1000 and group ID 1000. Make sure that proper access permissions are set on the application volume directory so that user ID 1000 or group ID 1000 has read and write access to it.
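For the last point, the host directory can be prepared with standard shell commands, for example as sketched below. The paths are illustrative: APPS_DIR defaults to a scratch directory for a dry run, and chown requires root on the real host.

```shell
# Prepare the application volume directory so that UID/GID 1000 can read and
# write it. On a real host, set APPS_DIR=/host/apps and run as root.
APPS_DIR="${APPS_DIR:-$(mktemp -d)/apps}"
mkdir -p "$APPS_DIR"
chown 1000:1000 "$APPS_DIR" 2>/dev/null || true   # chown needs root; skipped otherwise
chmod 775 "$APPS_DIR"                             # read/write for owner and group
stat -c '%a' "$APPS_DIR"                          # prints 775
```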

Java EE Application Deployment in Kubernetes

After the application deployment files are distributed throughout the Kubernetes cluster, you have several WebLogic Server deployment tools to choose from for deploying the Java EE applications to the containers in the pods.

WebLogic Server supports the following deployment tools for deploying, undeploying and redeploying the Java EE applications:

  • WebLogic Administration Console
  • WebLogic Scripting Tool (WLST)
  • weblogic.Deployer utility
  • REST API
  • wldeploy Ant task
  • The WebLogic Deployment API, which allows you to perform deployment tasks programmatically using Java classes.
  • The auto-deployment feature. When auto-deployment is enabled, copying an application into the /autodeploy directory of the Administration Server causes that application to be deployed automatically. Auto-deployment is intended for evaluation or testing purposes in a development environment only.

For more details about using these deployment tools, see Deploying Applications to Oracle WebLogic Server.

These tools can also be used in Kubernetes. The following samples show multiple ways to deploy and undeploy an application simpleApp.war in a WebLogic cluster myCluster. 

  • Using WLST in a Dockerfile
  • Using the weblogic.Deployer utility
  • Using the REST API

Note that the environment in which the deployment commands are run is based on the sample WebLogic domain wls-k8s-domain, available on GitHub at https://github.com/oracle/docker-images/tree/master/OracleWebLogic/samples/wls-k8s-domain.

In this environment, 

  • A sample WLS 12.2.1.3 domain and cluster are created by extending the Oracle WebLogic developer install image and running it in Kubernetes. The WebLogic domain (for example, base_domain) consists of an Administration Server and several Managed Servers running in the WebLogic cluster myCluster. Each WebLogic Server instance is started in a container, and each pod has one WebLogic Server container. For details about the wls-k8s-domain sample, see the GitHub page.
  • Each pod has one domain home volume directory (for example /u01/wlsdomain). This domain home volume directory is mapped to an external directory (for example /host/domain). The sample WLS 12.2.1.3 domain is created under this external directory.
  • Each pod can have an application volume directory (for example /shared/applications) created in the same way as the domain home volume directory. This application volume directory is mapped to an external directory (for example /host/apps). The Java EE applications can be distributed to this external directory.

Sample of Using Offline WLST in a Dockerfile to Deploy a Java EE Application

In this sample, a Dockerfile is used to build an application Docker image. This application image extends a wls-k8s-domain image that creates the sample wls-k8s-domain domain. The Dockerfile also calls WLST with a Python (.py) script to update the sample wls-k8s-domain domain configuration with a new application deployment in offline mode.

# Dockerfile
# Extends wls-k8s-domain
FROM wls-k8s-domain

# Copy the script files and call a WLST script.
COPY container-scripts/* /u01/oracle/

# Run a py script to add a new application deployment into the domain configuration
RUN wlst /u01/oracle/app-deploy.py

The script app-deploy.py deploys the application simpleApp.war using the offline WLST APIs:

# app-deploy.py
# Read the domain (domain home path from the wls-k8s-domain sample)
domainhome = '/u01/wlsdomain/base_domain'
readDomain(domainhome)

# Create application
# ==================
cd('/')
app = create('simpleApp', 'AppDeployment')
app.setSourcePath('/shared/applications/simpleApp.war')
app.setStagingMode('nostage')

# Assign application to cluster
# =================================
assign('AppDeployment', 'simpleApp', 'Target', 'myCluster')

# Update domain. Close It. Exit
# =================================
updateDomain()
closeDomain()
exit()

The application is deployed during the application Docker image build phase. When a WebLogic Server container is started, the simpleApp application starts with it and is ready to service client requests.

Sample of Using weblogic.Deployer to Deploy and Undeploy a Java EE Application in Kubernetes

In this sample, the application simpleApp.war exists in the external directory /host/apps on the host, as described in the prior section, Using Volumes in a Kubernetes Cluster.

The following commands show running the weblogic.Deployer utility in the Administration Server pod:

# Find the pod id for Admin Server pod: admin-server-1238998015-f932w
> kubectl get pods
  NAME                            READY     STATUS    RESTARTS   AGE
  admin-server-1238998015-f932w   1/1       Running   0          11m
  managed-server-0                1/1       Running   0          11m
  managed-server-1                1/1       Running   0          8m

# Find the Admin Server service name that can be connected to from the deployment command. 
# Here the Admin Server service name is admin-server which has a port 8001.
> kubectl get services
  NAME            CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
  admin-server    10.102.160.123   <nodes>       8001:30007/TCP   11m
  kubernetes      10.96.0.1        <none>        443/TCP          39d
  wls-service     10.96.37.152     <nodes>       8011:30009/TCP   11m
  wls-subdomain   None             <none>        8011/TCP         11m

# Open a bash shell in the Admin Server pod
> kubectl exec -it admin-server-1238998015-f932w /bin/bash

# Once in the Admin Server pod, set up the WebLogic environment, then run
# weblogic.Deployer to deploy simpleApp.war, located in the /shared/applications
# directory, to the cluster "myCluster"
]$ cd /u01/wlsdomain/base_domain/bin
]$ . setDomainEnv.sh
]$ java weblogic.Deployer -adminurl t3://admin-server:8001 -user weblogic -password weblogic1 -name simpleApp -targets myCluster -deploy /shared/applications/simpleApp.war

Accessing the following URL verifies that the Java EE application deployment to the WebLogic cluster completed successfully:

# Kubernetes routes the traffic to both managed-server-0 and managed-server-1 via the wls-service port 30009.
http://<hostIP>:30009/simpleApp/Scrabble.jsp

The following command uses the weblogic.Deployer utility to undeploy the application. Note its similarity to the steps for deployment:

# Open a bash shell in the Admin Server pod
> kubectl exec -it admin-server-1238998015-f932w /bin/bash

# Undeploy the simpleApp
]$ cd /u01/wlsdomain/base_domain/bin
]$ . setDomainEnv.sh
]$ java weblogic.Deployer -adminurl t3://admin-server:8001 -user weblogic -password weblogic1 -undeploy -name simpleApp

Sample of Using REST APIs to Deploy and Undeploy a Java EE Application in Kubernetes

In this sample, the application simpleApp.war has already been distributed to the host directory /host/apps. This host directory is mounted as the application volume directory /shared/applications in the pod admin-server-1238998015-f932w.

The following example shows executing a curl command in the pod admin-server-1238998015-f932w. This curl command sends a REST request to the Administration Server using NodePort 30007 to deploy the simpleApp to the WebLogic cluster myCluster.

# deploy simpleApp.war file to the WebLogic cluster
> kubectl exec admin-server-1238998015-f932w -- curl -v --user weblogic:weblogic1 \
          -H X-Requested-By:MyClient \
          -H Content-Type:application/json \
          -d '{ "name": "simpleApp",
                "sourcePath": "/shared/applications/simpleApp.war",
                "targets": [ { "identity": [ "clusters", "myCluster" ] } ] }' \
          -X POST http://<hostIP>:30007/management/weblogic/latest/edit/appDeployments

The following command uses the REST API to undeploy the application:

# undeploy simpleApp.war file from the WebLogic cluster
> kubectl exec admin-server-1238998015-f932w -- curl -v --user weblogic:weblogic1 \
          -H X-Requested-By:MyClient \
          -H Accept:application/json \
          -X DELETE http://<hostIP>:30007/management/wls/latest/deployments/application/id/simpleApp

Best Practices for Deploying Java EE Applications in Kubernetes

  • Deploy Java EE applications or modules to a WebLogic cluster, instead of individual WebLogic Server instances. This simplifies scaling the WebLogic cluster later because changes to deployment strategy are not necessary.
  • WebLogic Server deployment tools can be used in the Kubernetes environment.
  • When updating an application, follow the same steps as described above to distribute and deploy the application.
  • When using a pre-built WebLogic Server domain home in a Docker image, deploying applications to the running domain updates the domain configuration in the pods, which then becomes out of sync with the domain configuration in the Docker image. Whenever possible, avoid this synchronization issue by including the required applications in the pre-built domain home in the Docker image; this also avoids extra deployment steps later on.

Integrating ReadyApp Framework in Kubernetes Readiness Probe

Kubernetes provides a flexible approach to configuring load balancers and frontends in a way that isolates clients from the details of how services are deployed. As part of this approach, Kubernetes performs and reacts to a readiness probe to determine when a container is ready to accept traffic.

For its part, WebLogic Server provides the ReadyApp framework, which reports whether a WebLogic Server instance has completed startup and is ready to service client requests.

The ReadyApp framework uses two states: READY and NOT READY. The READY state means that not only is the WebLogic Server instance in the RUNNING state, but also that all applications deployed on it are ready to service requests. The NOT READY state means that the WebLogic Server instance's startup is incomplete and it is unable to accept traffic.

When starting a WebLogic Server container in a Kubernetes environment, you can use a Kubernetes readiness probe to access the ReadyApp framework on WebLogic Server. When the ReadyApp framework reports that a WebLogic Server container is in the READY state, the readiness probe notifies Kubernetes that traffic to that container may begin.

The following example shows how to use the ReadyApp framework integrated in a readiness probe to determine whether a WebLogic Server container running on port 8011 is ready to accept traffic.

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
[...]
spec:
  [...]
  template:
    [...]
    spec:
      containers:
        [...]
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /weblogic/ready
            port: 8011
            scheme: HTTP
[...]

The ReadyApp framework on WebLogic Server can be accessed at the URL http://<hostIP>:<port>/weblogic/ready. When WebLogic Server is running, this URL returns a page with either status 200 (READY) or 503 (NOT READY). When WebLogic Server is not running, an Error 404 page appears.

Similar to WebLogic Server, other Kubernetes applications can register with the ReadyApp framework and use a readiness probe to check their ReadyApp state. See Using the ReadyApp Framework for information about how to register an application with the ReadyApp framework.

Best Practices for Integrating ReadyApp Framework in Kubernetes Readiness Probe

It is recommended to register Kubernetes applications with the ReadyApp framework and to use a readiness probe to check the ReadyApp framework's status, which determines whether the applications are ready to service requests. Kubernetes routes traffic to a Kubernetes application only when its ReadyApp framework status is READY.

Conclusion

When integrating WebLogic Server in Kubernetes and Docker environments, customers can use the existing, powerful WebLogic Server deployment tools to deploy their Java EE applications onto WebLogic Server instances running in Kubernetes. Customers can also use Kubernetes features to manage WebLogic Server: they can use volumes to share application files across the containers in all pods in a Kubernetes cluster, use a readiness probe to monitor the WebLogic Server startup state, and more. This integration not only allows customers to support flexible deployment scenarios that fit their company's business practices, but also provides ways to quickly deploy WebLogic Server in a cloud environment, autoscale it on the fly, and update it seamlessly.
