
Proactive insights, news and tips from Oracle WebLogic Server Support. Learn Oracle from Oracle.

Voyager/HAProxy as Load Balancer to WebLogic Domains in Kubernetes

Overview

Load balancing is a widely used technology for building scalable and resilient applications. Its major function is to monitor servers and distribute network traffic across multiple servers, such as web applications or databases. Load balancing is also a necessity for containerized applications running on Kubernetes. In the WebLogic Server on Kubernetes Operator version 1.0, we added support for Voyager/HAProxy and enhanced the create-weblogic-domain.sh script to provide out-of-the-box support for it. The script supports load balancing to the servers of a single WebLogic domain/cluster. This blog describes how to configure Voyager/HAProxy to expand load balancing support to applications deployed to multiple WebLogic domains in Kubernetes.

Basics of Voyager/HAProxy

If you are new to HAProxy and Voyager, it's worth spending some time learning the basics of each.

HAProxy is free, open source software that provides a high-availability load balancer and proxy server for TCP- and HTTP-based applications. It's well known for being fast and efficient in terms of processor and memory usage. See the HAProxy Starter Guide.

Voyager is an HAProxy-backed Ingress controller (refer to the Kubernetes documentation about Ingress). Once installed in a Kubernetes cluster, the Voyager operator watches for Kubernetes Ingress resources and Voyager's own Ingress CRD, and automatically creates, updates, and deletes HAProxy instances accordingly. See the Voyager overview to understand how the Voyager operator works.

Running WebLogic Domains in Kubernetes

Check out the wls-operator-quickstart project from GitHub to your local environment. This project helps you set up the WebLogic Operator and domains with minimal manual steps. Please complete the steps in the 'Pre-Requirements' section of the README to set up your local environment.

With the help of the wls-operator-quickstart project, we want to set up two WebLogic domains running on Kubernetes using the WebLogic Operator, each in its own namespace:

  • The domain 'domain1' runs in the namespace 'default'. It has one cluster, 'cluster-1', which contains two Managed Servers, 'domain1-managed-server1' and 'domain1-managed-server2'.
  • The domain 'domain2' runs in the namespace 'test1'. It has one cluster, 'cluster-1', which contains two Managed Servers, 'domain2-managed-server1' and 'domain2-managed-server2'.
  • A web application, 'testwebapp.war', is deployed separately to the cluster in both domain1 and domain2. Its default page displays which Managed Server is processing the HTTP request.

Use the following steps to prepare the WebLogic domains that will serve as the back ends for HAProxy:

# change directory to root folder of wls-operator-quickstart
$ cd xxx/wls-operator-quickstart

# Build and deploy weblogic operator
$ ./operator.sh create

# Create domain1. Change the value of `loadBalancer` to `NONE` in domain1-inputs.yaml before running.
$ ./domain.sh create

# Create domain2. Change the value of `loadBalancer` to `NONE` in domain2-inputs.yaml before running.
$ ./domain.sh create -d domain2 -n test1

# Install Voyager
$ kubectl create namespace voyager
$ curl -fsSL https://raw.githubusercontent.com/appscode/voyager/6.0.0/hack/deploy/voyager.sh \
    | bash -s -- --provider=baremetal --namespace=voyager

Check the status of the WebLogic domains, as follows:

# Check status of domain1
$ kubectl get all
NAME                          READY     STATUS    RESTARTS   AGE
pod/domain1-admin-server      1/1       Running   0          5h
pod/domain1-managed-server1   1/1       Running   0          5h
pod/domain1-managed-server2   1/1       Running   0          5h

NAME                                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)           AGE
service/domain1-admin-server                        NodePort    10.105.135.58    <none>        7001:30705/TCP    5h
service/domain1-admin-server-extchannel-t3channel   NodePort    10.111.9.15      <none>        30015:30015/TCP   5h
service/domain1-cluster-cluster-1                   ClusterIP   10.108.34.66     <none>        8001/TCP          5h
service/domain1-managed-server1                     ClusterIP   10.107.185.196   <none>        8001/TCP          5h
service/domain1-managed-server2                     ClusterIP   10.96.86.209     <none>        8001/TCP          5h
service/kubernetes                                  ClusterIP   10.96.0.1        <none>        443/TCP           5h

# Verify the web app in domain1 by running curl on the admin server pod to access the cluster service
$ kubectl -n default exec -it domain1-admin-server -- curl http://domain1-cluster-cluster-1:8001/testwebapp/

# Check status of domain2
$ kubectl -n test1 get all
NAME                          READY     STATUS    RESTARTS   AGE
pod/domain2-admin-server      1/1       Running   0          5h
pod/domain2-managed-server1   1/1       Running   0          5h
pod/domain2-managed-server2   1/1       Running   0          5h

NAME                                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)           AGE
service/domain2-admin-server                        NodePort    10.97.77.35      <none>        7001:30701/TCP    5h
service/domain2-admin-server-extchannel-t3channel   NodePort    10.98.239.28     <none>        30012:30012/TCP   5h
service/domain2-cluster-cluster-1                   ClusterIP   10.102.228.204   <none>        8001/TCP          5h
service/domain2-managed-server1                     ClusterIP   10.96.59.190     <none>        8001/TCP          5h
service/domain2-managed-server2                     ClusterIP   10.101.102.102   <none>        8001/TCP          5h

# Verify the web app in domain2 by running curl on the admin server pod to access the cluster service
$ kubectl -n test1 exec -it domain2-admin-server -- curl http://domain2-cluster-cluster-1:8001/testwebapp/
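As an aside, the cluster service names used in these curl commands follow the operator's `<domainUID>-cluster-<clusterName>` naming convention, visible in the service listings above. A tiny shell helper (hypothetical, for illustration only) makes the same check repeatable for any domain:

```shell
# Build the in-cluster URL of a domain's cluster service, following the
# <domainUID>-cluster-<clusterName> naming convention shown in the listings.
cluster_url() {
  local domain_uid=$1
  local cluster_name=${2:-cluster-1}
  echo "http://${domain_uid}-cluster-${cluster_name}:8001/testwebapp/"
}

cluster_url domain1   # -> http://domain1-cluster-cluster-1:8001/testwebapp/

# Usage against the running domains (not executed here):
#   kubectl -n default exec -it domain1-admin-server -- curl "$(cluster_url domain1)"
#   kubectl -n test1   exec -it domain2-admin-server -- curl "$(cluster_url domain2)"
```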

With both WebLogic domains running on Kubernetes, we will demonstrate two approaches, each using different HAProxy features, to set up Voyager as a single entry point to the two WebLogic domains.

Using Host Name-Based Routing

Create the file 'voyager-host-routing.yaml', which contains an Ingress resource that uses host name-based routing.

apiVersion: voyager.appscode.com/v1beta1
kind: Ingress
metadata:
  name: hostname-routing 
  namespace: default
  annotations:
    ingress.appscode.com/type: 'NodePort'
    ingress.appscode.com/stats: 'true'
    ingress.appscode.com/affinity: 'cookie'
spec:
  rules:
  - host: domain1.org
    http:
      nodePort: '30305'
      paths:
      - backend:
          serviceName: domain1-cluster-cluster-1
          servicePort: '8001'
  - host: domain2.org
    http:
      nodePort: '30305'
      paths:
      - backend:
          serviceName: domain2-cluster-cluster-1.test1
          servicePort: '8001'

Then deploy the YAML file using `kubectl create -f voyager-host-routing.yaml`.

Testing Load Balancing with Host Name-Based Routing

To make host name-based routing work, you need to set up virtual hosting which usually involves DNS changes. For demonstration purposes, we will use curl commands to simulate load balancing with host name-based routing.

# Verify load balancing on domain1
$ curl --silent -H 'host: domain1.org' http://${HOSTNAME}:30305/testwebapp/ | grep InetAddress.hostname
    <li>InetAddress.hostname: domain1-managed-server1
$ curl --silent -H 'host: domain1.org' http://${HOSTNAME}:30305/testwebapp/ | grep InetAddress.hostname
    <li>InetAddress.hostname: domain1-managed-server2

# Verify load balancing on domain2
$ curl --silent -H 'host: domain2.org' http://${HOSTNAME}:30305/testwebapp/ | grep InetAddress.hostname
    <li>InetAddress.hostname: domain2-managed-server1
$ curl --silent -H 'host: domain2.org' http://${HOSTNAME}:30305/testwebapp/ | grep InetAddress.hostname
    <li>InetAddress.hostname: domain2-managed-server2

The result is:

  • If host name 'domain1.org' is specified, the request will be processed by Managed Servers in domain1.
  • If host name 'domain2.org' is specified, the request will be processed by Managed Servers in domain2.
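Because the Ingress enables cookie-based session affinity, a browser that keeps cookies will stick to one Managed Server, while fresh curl invocations (which discard cookies) rotate across the back ends. To tally the distribution over many requests, the server name can be parsed out of the response page; `extract_server` below is a hypothetical helper based on the `InetAddress.hostname` line shown in the output above:

```shell
# Hypothetical helper: pull the serving Managed Server name out of the
# testwebapp response HTML (the page prints a line such as
# "<li>InetAddress.hostname: domain1-managed-server1").
extract_server() {
  sed -n 's/.*InetAddress\.hostname: *\([A-Za-z0-9-]*\).*/\1/p'
}

# Usage against the running cluster from this blog (not executed here):
#   for i in $(seq 1 10); do
#     curl --silent -H 'host: domain1.org' "http://${HOSTNAME}:30305/testwebapp/" \
#       | extract_server
#   done | sort | uniq -c
# A roughly even count per Managed Server indicates the traffic is balanced.

# Offline demonstration on a sample response line:
echo '    <li>InetAddress.hostname: domain1-managed-server1' | extract_server
```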

Using Path-Based Routing and URL Rewriting

In this section we use path-based routing with URL rewriting to achieve the same behavior as host name-based routing.

Create the Ingress resource file 'voyager-path-routing.yaml'.

apiVersion: voyager.appscode.com/v1beta1
kind: Ingress
metadata:
  name: path-routing
  namespace: default
  annotations:
    ingress.appscode.com/type: 'NodePort'
    ingress.appscode.com/stats: 'true'
    ingress.appscode.com/rewrite-target: "/testwebapp"
spec:
  rules:
  - host: '*'
    http:
      nodePort: '30307'
      paths:
      - path: /domain1
        backend:
          serviceName: domain1-cluster-cluster-1
          servicePort: '8001'
      - path: /domain2
        backend:
          serviceName: domain2-cluster-cluster-1.test1
          servicePort: '8001'

Then deploy the YAML file using `kubectl create -f voyager-path-routing.yaml`.

Verify Load Balancing with Path-Based Routing

To verify the load balancing results, we use curl commands; alternatively, you can access the URLs directly from a web browser.

# Verify load balancing on domain1
$ curl --silent http://${HOSTNAME}:30307/domain1/ | grep InetAddress.hostname
    <li>InetAddress.hostname: domain1-managed-server1
$ curl --silent http://${HOSTNAME}:30307/domain1/ | grep InetAddress.hostname
    <li>InetAddress.hostname: domain1-managed-server2

# Verify load balancing on domain2
$ curl --silent http://${HOSTNAME}:30307/domain2/ | grep InetAddress.hostname
    <li>InetAddress.hostname: domain2-managed-server1
$ curl --silent http://${HOSTNAME}:30307/domain2/ | grep InetAddress.hostname
    <li>InetAddress.hostname: domain2-managed-server2

With path-based routing, we specify different URL paths to dispatch traffic to different WebLogic domains. Thanks to the URL rewriting feature, we ultimately access the web application under the same context path in each domain.
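The effect of the `rewrite-target` annotation can be pictured as a simple prefix substitution applied to the request path before it is forwarded to the WebLogic cluster. The following shell sketch only illustrates the mapping; HAProxy performs the actual rewrite internally:

```shell
# Illustrative only: mimic the path mapping performed by the
# ingress.appscode.com/rewrite-target annotation.
rewrite_path() {
  # $1 = matched Ingress path prefix, $2 = incoming request path
  echo "$2" | sed "s|^$1|/testwebapp|"
}

rewrite_path /domain1 /domain1/index.jsp   # -> /testwebapp/index.jsp
rewrite_path /domain2 /domain2/index.jsp   # -> /testwebapp/index.jsp
```

Both domains therefore serve the request under the same '/testwebapp' context path, even though clients address them through different URL prefixes.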

Cleanup

After you finish your exercise using the instructions in this blog, you may want to clean up all the resources created in Kubernetes. 

# Cleanup voyager ingress resources
$ kubectl delete -f voyager-host-routing.yaml
$ kubectl delete -f voyager-path-routing.yaml
 
# Uninstall Voyager
$ curl -fsSL https://raw.githubusercontent.com/appscode/voyager/6.0.0/hack/deploy/voyager.sh \
      | bash -s -- --provider=baremetal --namespace=voyager --uninstall --purge

# Delete wls domains and wls operator
$ cd <QUICKSTART_ROOT>
$ ./domain.sh delete --clean-all
$ ./domain.sh delete -d domain2 -n test1 --clean-all
$ ./operator.sh delete

Summary

In this blog, we described how to set up a Voyager load balancer to provide high-availability load balancing and a proxy server for TCP- and HTTP-based requests to applications deployed in WebLogic Server domains. The samples show how to use Voyager as a single entry point in front of multiple WebLogic domains, using features such as host name-based routing, path-based routing, and URL rewriting. We hope you find this blog helpful and try using Voyager in your WebLogic on Kubernetes deployments.
