Proactive insights, news and tips from Oracle WebLogic Server Support. Learn Oracle from Oracle.

  • October 16, 2017

Let WebLogic work with Elastic Stack in Kubernetes

Over the past decade, application development, distribution, and deployment have changed significantly, and more and more popular tools have become available to meet the new requirements. Some of the tools you may want to use are provided in the Elastic Stack. In this article, we'll show you how to integrate them with WebLogic Server in Kubernetes.

Note: You can find the code for this article at https://github.com/xiumliang/Weblogic-ELK.

What Is the Elastic Stack?

The Elastic Stack consists of several products: Elasticsearch, Logstash, Kibana, and others. Using the Elastic Stack, you can gain insight from your application's log data, in real time.

Elasticsearch is a distributed, RESTful search and analytics engine capable of solving a growing number of use cases. As the heart of the Elastic Stack, it stores your data centrally so you can discover the expected and uncover the unexpected.

Logstash is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite “stash.”

Kibana lets you visualize your Elasticsearch data and navigate the Elastic Stack. It gives you the freedom to select how to shape your data. You don’t always have to know what you're looking for.

Let WebLogic Server Work with the Elastic Stack

There are several ways to use the Elastic Stack to gather and analyze WebLogic Server logs. In this article, we will introduce two of them.

Integrate the Elastic Stack with WebLogic Server by Using Shared Volumes

WebLogic Server instances put their logs into a shared volume. The volume could be an NFS mount or a host path. Logstash collects the logs from the volume and transfers the filtered logs to Elasticsearch. This type of integration requires a shared disk for the logs. The advantage is that the log files are stored and persisted, even after the WebLogic Server and Elastic Stack pods shut down. Also, by using a shared volume, you do not have to configure the network between Logstash and Elasticsearch; you just need to deploy two pods (Elastic Stack and WebLogic Server) that use the shared volume. The disadvantage is that you must maintain a shared volume for the pods and keep an eye on its disk space. In a multi-server environment, you also need to arrange the logs on the shared volume so that there is no conflict between them.

Deploy a WebLogic Server Pod to Kubernetes

$ kubectl create -f k8s_weblogic.yaml

In this k8s_weblogic.yaml file, we've defined a shared volume of type 'hostPath'. When the pod starts, the WebLogic Server logs are written to the shared volume so Logstash can access them. We can change the volume type to NFS or another type supported by Kubernetes, but we must be careful about permissions. If the permission is not correct, the logs may not be written or read on the shared volume.
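The article's actual k8s_weblogic.yaml is in the linked repository; as a rough sketch of the kind of hostPath definition it describes (the pod name, image name, and paths below are hypothetical, not the article's actual values):

```yaml
# Hypothetical sketch of a pod with a hostPath shared volume for logs
apiVersion: v1
kind: Pod
metadata:
  name: weblogic
spec:
  containers:
  - name: weblogic
    image: store/oracle/weblogic:12.2.1.2   # hypothetical image name
    volumeMounts:
    - name: shared-logs
      mountPath: /shared-logs               # WebLogic writes its logs here
  volumes:
  - name: shared-logs
    hostPath:
      path: /scratch/shared-logs            # directory on the Kubernetes node
```

Because the Elastic Stack pod mounts the same hostPath, Logstash sees the log files WebLogic Server writes, which is what makes this integration style work without any network configuration between the two pods.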

We can check if the pod is deployed and started:

$ kubectl get pods

We get the following:

NAME                       READY  STATUS   RESTARTS  AGE
weblogic-1725565574-fgmsr  1/1    Running  0         31s

Deploy an Elastic Stack Pod to Kubernetes

$ kubectl create -f k8s_elk.yaml

The k8s_elk.yaml file defines the same shared volume as the k8s_weblogic.yaml file: both the WebLogic Server and the Elastic Stack pods mount the same volume so that Logstash can read the logs.

Please note that Logstash is not started when the pod starts. We need to further configure Logstash before starting it.

After the Elastic Stack pod is started, we have two pods in the Kubernetes node:

NAME                       READY  STATUS   RESTARTS  AGE
weblogic-1725565574-fgmsr  1/1    Running  0         31s
elk-3823852708-zwbfg       1/1    Running  0         6m

Connect to the Pod and Verify That the Elastic Stack Components Started

$ kubectl exec -it elk-3823852708-zwbfg /bin/bash

Run the following commands to verify that Elasticsearch has started.

$ curl -X GET -i ""
$ curl -X GET -i ""

We get the following indices if Elasticsearch was started:


Because Kibana is a web application, we verify Kibana by opening the following URL in a browser:


We get Kibana's welcome page. Port 31711 is the node port defined in k8s_elk.yaml.
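Exposing Kibana on a node port is a standard Kubernetes Service setting. A hypothetical sketch of the kind of Service definition k8s_elk.yaml would contain (the service name and selector label are illustrative; only 31711 comes from the article):

```yaml
# Hypothetical Service exposing Kibana on node port 31711
apiVersion: v1
kind: Service
metadata:
  name: elk            # hypothetical service name
spec:
  type: NodePort
  selector:
    k8s-app: elk       # must match the Elastic Stack pod's labels
  ports:
  - name: kibana
    port: 5601         # Kibana's default port inside the pod
    nodePort: 31711    # the node port mentioned above
```

With a NodePort service, Kibana is reachable from outside the cluster at any node's IP address on port 31711.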

Configure Logstash

$ vim /opt/logstash/config/logstash.conf

In the logstash.conf file, the "input" block defines where Logstash gets the input logs. The "filter" block defines a simple rule for how to filter WebLogic Server logs. The "output" block transfers the filtered logs to the Elasticsearch address and port.
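As a rough illustration of what the filter block's grok rule does, here is a Python sketch that extracts the same kind of leading fields from a WebLogic-style log line. The sample line and the simplified pattern are invented for illustration; a real grok expression handles many more fields.

```python
import re

# Simplified stand-in for the grok pattern: it captures only the first five
# fields of a WebLogic server-log entry
# (<timestamp> <severity> <subsystem> <machine> <server> ...).
LOG_PATTERN = re.compile(
    r"<(?P<log_timestamp>[^>]*)> <(?P<log_level>\w+)> <(?P<thread>\w+)> "
    r"<(?P<hostname>[\w.-]+)> <(?P<servername>[\w.-]+)>"
)

def parse_weblogic_line(line):
    """Return a dict of the leading fields, or None if the line doesn't match."""
    match = LOG_PATTERN.search(line)
    return match.groupdict() if match else None

# Invented sample line in the general shape of a WebLogic server log entry
sample = ("####<Jul 28, 2017 1:15:34 PM PDT> <Info> <Management> "
          "<host1> <AdminServer> <main> <...>")
```

Running `parse_weblogic_line(sample)` yields a dict with `log_level` set to "Info" and `servername` set to "AdminServer". Logstash's grok filter performs the equivalent extraction and attaches the named fields to each event before the output block ships it to Elasticsearch.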

Start Logstash and Verify the Result

$ cd /opt/logstash/bin
$ ./logstash -f ../config/logstash.conf

After Logstash is started, open the browser and point to the Elasticsearch address:


Compared to the previous result, there is an additional line, logstash-2017.07.28, which indicates that Logstash has started and transferred logs to Elasticsearch. 

Also, we can try to access any WebLogic Server applications. Now the Elastic Stack can gather and process the logs.

Integrate the Elastic Stack with WebLogic Server via the Network

In this approach, WebLogic Server and the Logstash agent are deployed in one pod, and Elasticsearch and Kibana are deployed in another. Because Logstash and Elasticsearch are not in the same pod, Logstash has to transfer data to Elasticsearch through an external IP address and port, so for this type of integration we need to configure the network for Logstash. The advantage is that we do not have to maintain a shared disk or arrange the log folders when using multiple WebLogic Server instances. The disadvantage is that we must add a Logstash container to each WebLogic Server pod so that the logs can be collected.

Deploy Elasticsearch and Kibana to Kubernetes

$ kubectl create -f k8s_ek.yaml

The k8s_ek.yaml file is similar to the k8s_elk.yaml file; they use the same image. The difference is that k8s_ek.yaml sets the environment variable LOGSTASH_START=0, which tells the container not to start Logstash at startup. Also, k8s_ek.yaml does not define a port for Logstash; the Logstash port will be defined in the pod that runs WebLogic Server.
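Assuming the image honors a LOGSTASH_START environment variable as the article describes, the relevant difference might look like this (the container and image names are hypothetical; the article does not show its image):

```yaml
# Hypothetical container spec excerpt from k8s_ek.yaml
containers:
- name: ek
  image: sebp/elk        # hypothetical image name
  env:
  - name: LOGSTASH_START
    value: "0"           # tell the image not to launch Logstash at startup
```

Elasticsearch and Kibana still start normally; only Logstash is held back, because it will run next to WebLogic Server instead.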

We can verify that the ek pod started with:


Generate the Logstash Configuration with EK Pod IP Address

$ kubectl describe pod ek-3905776065-4rmdx

We get the following information:

Name:         ek-3905776065-4rmdx
Namespace:    liangz
Node:         [NODE_HOST_NAME]/
Start Time:   Thu, 02 Aug 2017 14:37:19 +0800
Labels:       k8s-app=ek
Annotations:  kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"liangz-cn","name":"ek-3905776065","uid":"09a30990-7296-11e7-bd24-0021f6e6a769","a...
Status:       Running

The IP address of the ek pod is []. We need to put this IP address in the logstash.conf file.

Create the logstash.conf file in the shared volume where Logstash will read it:

input {
  file {
    path => "/shared-logs/*.log*"
    start_position => beginning
  }
}

filter {
  grok {
    match => [ "message", "<%{DATA:log_timestamp}> <%{WORD:log_level}> <%{WORD:thread}> <%{HOSTNAME:hostname}> <%{HOSTNAME:servername}> <%{DATA:timer}> <<%{DATA:kernel}>> <> <%{DATA:uuid}> <%{NUMBER:timestamp}> <%{DATA:misc}> <%{DATA:log_number}> <%{DATA:log_message}>" ]
  }
}

output {
  elasticsearch {
    hosts => [""]
  }
}

We will define two volumeMounts in the Logstash-WebLogic Server pod:

Log path: /shared-logs, for the WebLogic Server instance, which shares its logs with Logstash in the same pod.

conf path: /shared-conf, for Logstash, which reads the logstash.conf file from there.

The logstash.conf file sets the input file path to /shared-logs. It also connects to Elasticsearch at the address "" that we discovered previously.

Deploy the Logstash and WebLogic Server Pod to Kubernetes

$ kubectl create -f k8s_logstash_weblogic.yaml

In this k8s_logstash_weblogic.yaml file, we define two containers (WebLogic Server and Logstash). They share the WebLogic Server logs through a pod-level shared volume, "shared-logs". This is a benefit of defining WebLogic Server and Logstash together: we do not need an NFS. If we want to deploy the pod to more nodes, we just need to modify the replicas value; all the new pods will have their own pod-level shared volume, so we do not have to worry about conflicts between the logs.
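A pod-level shared volume of this kind is typically an emptyDir. A hypothetical sketch of what the pod template in k8s_logstash_weblogic.yaml could look like (image names and paths are illustrative, not the article's actual values):

```yaml
# Hypothetical pod template: two containers sharing an emptyDir volume
spec:
  containers:
  - name: weblogic
    image: store/oracle/weblogic:12.2.1.2   # hypothetical image name
    volumeMounts:
    - name: shared-logs
      mountPath: /shared-logs               # WebLogic writes logs here
  - name: logstash
    image: logstash:5.5.1                   # hypothetical image name
    volumeMounts:
    - name: shared-logs
      mountPath: /shared-logs               # Logstash reads the same files
    - name: shared-conf
      mountPath: /shared-conf               # holds logstash.conf
  volumes:
  - name: shared-logs
    emptyDir: {}                            # pod-level shared volume
  - name: shared-conf
    hostPath:
      path: /scratch/shared-conf            # hypothetical location of logstash.conf
```

An emptyDir volume is created fresh for each pod replica and lives as long as the pod does, which is exactly why every scaled-out copy gets its own conflict-free log directory.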

$ kubectl get pods
NAME                         READY    STATUS    RESTARTS   AGE
ek-3905776065-4rmdx           1/1     Running      0       6m
logstash-wls-38554443-n366v   2/2     Running      0       14s

Verify the Result

Open the following URL:


The first line shows us that Logstash has collected the logs and transferred them to Elasticsearch.
