As part of certifying WebLogic Server on Kubernetes, we have identified best practices for sharing file data among WebLogic Server pods running in a Kubernetes environment. In this blog, I review the WebLogic Server services and files that are typically configured to use shared storage, and I provide complete end-to-end samples, which you can download and run, that show how to mount shared storage for a WebLogic domain orchestrated by Kubernetes.
WebLogic Server Persistence in Volumes
When running WebLogic Server on Kubernetes, refer to the blog Docker Volumes in WebLogic for information about the advantages of using data volumes. That blog also identifies the WebLogic Server artifacts that are good candidates for being persisted in data volumes.
Kubernetes Solutions
In a Kubernetes environment, pods are ephemeral. To persist data, Kubernetes provides the Volume abstraction, and the PersistentVolume (PV) and PersistentVolumeClaim (PVC) API resources.
Based on the official Kubernetes definitions [Kubernetes Volumes and Kubernetes Persistent Volumes and Claims], a PV is a piece of storage in the cluster that has been provisioned by an administrator, and a PVC is a request for storage by a user. PVs and PVCs are therefore independent entities that exist outside of pods, and a pod can reference them for file persistence and for file sharing among pods inside a Kubernetes cluster.
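The general pattern, sketched below with illustrative names, is that a pod's volume references a PVC by name, and Kubernetes binds that claim to a matching PV; the full WebLogic examples later in this blog follow exactly this shape.

# A minimal sketch of the pattern (resource names here are illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: app
    image: example-image        # placeholder image
    volumeMounts:
    - name: shared-data
      mountPath: /data          # where the volume appears inside the container
  volumes:
  - name: shared-data
    persistentVolumeClaim:
      claimName: example-pvc    # Kubernetes binds this claim to a matching PV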
When running WebLogic Server on Kubernetes, using PVs and PVCs to handle shared storage is recommended for the following reasons:
- WebLogic Server instances usually run in pods spread across multiple nodes, and all of those pods require access to a shared PV. The life cycle of a WebLogic Server instance is not tied to a single pod.
- PVs and PVCs provide more control. For example, you can specify access modes for concurrent read/write management, mount options provided by volume plugins, storage capacity requirements, reclaim policies for resources, and more.
Use Cases of Kubernetes Volumes for WebLogic Server
To see the details of the samples, or to run them locally, download the examples and follow the steps below.
Software Versions
Host machine: Oracle Linux 7u3 UEK4 (x86-64)
Kubernetes v1.7.8
Docker 17.03 CE
Prepare Dependencies
- Build the oracle/weblogic:12.2.1.3-developer image locally based on the Dockerfile and scripts at https://github.com/oracle/docker-images/tree/master/OracleWebLogic/dockerfiles/12.2.1.3/.
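For example, assuming you have cloned the oracle/docker-images repository and downloaded the WebLogic installer binaries that its README requires, building the developer image looks roughly like the following sketch (the buildDockerImage.sh script and its flags come from that repository; verify them against the current sources):

$ git clone https://github.com/oracle/docker-images.git
$ cd docker-images/OracleWebLogic/dockerfiles
$ sh buildDockerImage.sh -v 12.2.1.3 -d    # -d builds the developer distribution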
- Download the WebLogic Kubernetes domain sample source code from https://github.com/oracle/docker-images/tree/master/OracleWebLogic/samples/wls-k8s-domain and put it in a local folder named wls-k8s-domain.
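If you cloned the oracle/docker-images repository in the previous step, one way to do this is simply to copy the sample out of it:

$ cp -r docker-images/OracleWebLogic/samples/wls-k8s-domain ./wls-k8s-domain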
- Build the WebLogic domain image locally based on the Dockerfile and scripts.
$ cd wls-k8s-domain
$ docker build -t wls-k8s-domain .
- For Use Case 2, below, prepare an NFS server and a shared directory by entering the following commands (in this example, the NFS server runs on machine 10.232.128.232). Note that Use Case 1 uses a host path instead of NFS and does not require this step.
# systemctl start rpcbind.service
# systemctl start nfs.service
# systemctl start nfslock.service
$ mkdir -p /scratch/nfsdata
$ chmod o+rw /scratch/nfsdata
# echo "/scratch/nfsdata *(rw,fsid=root,no_root_squash,no_subtree_check)" >> /etc/exports
# exportfs -r
By default, in the WebLogic domain wls-k8s-domain, all processes in the pods that contain WebLogic Server instances run with user ID 1000 and group ID 1000. Proper permissions need to be set on the external NFS shared directory to ensure that user ID 1000 and group ID 1000 have read and write access to the NFS volume. To simplify permissions management in the examples, we also grant read and write permission to all other users on the shared directory.
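If you would rather not open the directory to all users, a more restrictive alternative (shown here as a sketch, assuming root access on the NFS host) is to give ownership of the export to the user and group IDs that the WebLogic processes run as:

# chown 1000:1000 /scratch/nfsdata    # match the UID/GID used inside the pods
# chmod 770 /scratch/nfsdata          # owner and group get full access; others get none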
Use Case 1: Host Path Mapping at Individual Machine with a Kubernetes Volume
The WebLogic domain consists of an Administration Server and multiple Managed Servers, each running inside its own pod. All pods have volumes directly mounted to a folder on the physical machine. The domain home is created in a shared folder when the Administration Server pod is first started.
At runtime, all WebLogic Server instances, including the Administration Server, share the same domain home directory via a mounted volume.
Note: This example runs on a single machine, or node, but the approach also works when running the WebLogic domain across multiple machines. In that case, every machine must mount the same shared directory at the same host path; the host path volume then refers to that directory, so access to the volume is controlled by the underlying shared storage. Given a set of machines that are already set up with a shared directory, this approach is simpler than setting up an NFS client (although perhaps not as portable).
To run this example, complete the following steps:
- Prepare the yml file for the WebLogic Administration Server.
Edit wls-admin.yml to mount the host folder /scratch/data to /u01/wlsdomain in the Administration Server pod:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: admin-server
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: admin-server
    spec:
      containers:
      - name: admin-server
        image: wls-k8s-domain
        imagePullPolicy: Never
        command: ["sh"]
        args: ["/u01/oracle/startadmin.sh"]
        readinessProbe:
          httpGet:
            path: /weblogic/ready
            port: 8001
          initialDelaySeconds: 15
          timeoutSeconds: 5
        ports:
        - containerPort: 8001
        env:
        - name: WLUSER
          valueFrom:
            secretKeyRef:
              name: wlsecret
              key: username
        - name: WLPASSWORD
          valueFrom:
            secretKeyRef:
              name: wlsecret
              key: password
        volumeMounts:
        # name must match the volume name below
        - name: domain-home
          mountPath: "/u01/wlsdomain"
      volumes:
      - name: domain-home
        hostPath:
          path: /scratch/data
          type: Directory
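Note that a hostPath volume with type: Directory requires the directory to already exist on the node (the type field is enforced in newer Kubernetes releases); create and open it up before starting the pods, for example:

$ mkdir -p /scratch/data
$ chmod o+rw /scratch/data    # ensure UID/GID 1000 inside the pods can read and write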
- Prepare the yml file for the Managed Servers.
Edit wls-stateful.yml to mount the host folder /scratch/data to /u01/wlsdomain in the Managed Server pods:
apiVersion: apps/v1beta1  # for versions before 1.6.0 use extensions/v1beta1
kind: StatefulSet
metadata:
  name: managed-server
spec:
  serviceName: wls-subdomain
  replicas: 2
  template:
    metadata:
      name: ms
      labels:
        app: managed-server
    spec:
      subdomain: wls-subdomain
      containers:
      - name: managed-server
        image: wls-k8s-domain
        imagePullPolicy: Never
        command: ["sh"]
        args: ["/u01/oracle/startms.sh"]
        readinessProbe:
          httpGet:
            path: /weblogic/ready
            port: 8011
          initialDelaySeconds: 15
          timeoutSeconds: 5
        ports:
        - containerPort: 8011
        env:
        - name: JAVA_OPTIONS
          value: "-Dweblogic.StdoutDebugEnabled=true"
        - name: USER_MEM_ARGS
          value: "-Xms64m -Xmx256m "
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: DNS_DOMAIN_NAME
          value: "wls-subdomain"
        - name: WLUSER
          valueFrom:
            secretKeyRef:
              name: wlsecret
              key: username
        - name: WLPASSWORD
          valueFrom:
            secretKeyRef:
              name: wlsecret
              key: password
        volumeMounts:
        # name must match the volume name below
        - name: domain-home
          mountPath: "/u01/wlsdomain"
      volumes:
      - name: domain-home
        hostPath:
          path: /scratch/data
          type: Directory
- Create the Administration Server and Managed Server pods with the shared volume. These WebLogic Server instances will start from the mounted domain location.
$ kubectl create -f wls-admin.yml
$ kubectl create -f wls-stateful.yml
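Once both resources are created, you can verify that every server sees the same domain home. A quick check might look like the following sketch (pod names will vary; the managed-server pod names come from the StatefulSet):

$ kubectl get pods                                     # wait until all pods are Running
$ kubectl exec managed-server-0 -- ls /u01/wlsdomain   # lists the domain created by the Administration Server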
Use Case 2: NFS Sharing with Kubernetes PV and PVC
This example shows a WebLogic Server cluster with one Administration Server and several Managed Server instances, each server residing in a dedicated pod. All the pods have volumes mounted to a central NFS server that is located on a physical machine that the pods can reach. The first time the Administration Server pod is started, the WebLogic domain is created in the shared NFS folder. At runtime, all WebLogic Server instances, including the Administration Server, share the same domain home directory through a volume mounted by means of the PV and PVC.
- In this sample, the NFS server is on host 10.232.128.232, which exports /scratch/nfsdata read/write to all external hosts.
- Prepare the PV.
Edit pv.yml to make sure each WebLogic Server instance has read and write access to the NFS shared folder:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv1
  labels:
    app: wls-domain
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle  # Retain, Recycle, Delete
  nfs:
    # Please use the correct NFS server host name or IP address
    server: 10.232.128.232
    path: "/scratch/nfsdata"
- Prepare the PVC.
Edit pvc.yml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wlserver-pvc-1
  labels:
    app: wls-server
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
Kubernetes will find the matching PV for the PVC, and bind them together [Kubernetes Persistent Volumes and Claims].
- Create the PV and PVC:
$ kubectl create -f pv.yml
$ kubectl create -f pvc.yml
Then check the PVC status to make sure it binds to the PV:
$ kubectl get pvc
NAME             STATUS    VOLUME    CAPACITY   ACCESSMODES   STORAGECLASS   AGE
wlserver-pvc-1   Bound     pv1       10Gi       RWX           manual         7s
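You can also confirm the binding from the PV side; its status changes from Available to Bound once the claim is matched. The output below is illustrative, and its format may vary by Kubernetes version:

$ kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                    STORAGECLASS   AGE
pv1       10Gi       RWX           Recycle         Bound     default/wlserver-pvc-1   manual         30s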
- Prepare the yml file for the Administration Server. It has a reference to the PVC wlserver-pvc-1.
Edit wls-admin.yml to mount the NFS shared folder to /u01/wlsdomain in the WebLogic Server Administration Server pod:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: admin-server
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: admin-server
    spec:
      containers:
      - name: admin-server
        image: wls-k8s-domain
        imagePullPolicy: Never
        command: ["sh"]
        args: ["/u01/oracle/startadmin.sh"]
        readinessProbe:
          httpGet:
            path: /weblogic/ready
            port: 8001
          initialDelaySeconds: 15
          timeoutSeconds: 5
        ports:
        - containerPort: 8001
        env:
        - name: WLUSER
          valueFrom:
            secretKeyRef:
              name: wlsecret
              key: username
        - name: WLPASSWORD
          valueFrom:
            secretKeyRef:
              name: wlsecret
              key: password
        volumeMounts:
        # name must match the volume name below
        - name: domain-home
          mountPath: "/u01/wlsdomain"
      volumes:
      - name: domain-home
        persistentVolumeClaim:
          claimName: wlserver-pvc-1
- Prepare the yml file for the Managed Servers. It has a reference to the PVC wlserver-pvc-1.
Edit wls-stateful.yml to mount the NFS shared folder to /u01/wlsdomain in each Managed Server pod:
apiVersion: apps/v1beta1  # for versions before 1.6.0 use extensions/v1beta1
kind: StatefulSet
metadata:
  name: managed-server
spec:
  serviceName: wls-subdomain
  replicas: 2
  template:
    metadata:
      name: ms
      labels:
        app: managed-server
    spec:
      subdomain: wls-subdomain
      containers:
      - name: managed-server
        image: wls-k8s-domain
        imagePullPolicy: Never
        command: ["sh"]
        args: ["/u01/oracle/startms.sh"]
        readinessProbe:
          httpGet:
            path: /weblogic/ready
            port: 8011
          initialDelaySeconds: 15
          timeoutSeconds: 5
        ports:
        - containerPort: 8011
        env:
        - name: JAVA_OPTIONS
          value: "-Dweblogic.StdoutDebugEnabled=true"
        - name: USER_MEM_ARGS
          value: "-Xms64m -Xmx256m "
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: DNS_DOMAIN_NAME
          value: "wls-subdomain"
        - name: WLUSER
          valueFrom:
            secretKeyRef:
              name: wlsecret
              key: username
        - name: WLPASSWORD
          valueFrom:
            secretKeyRef:
              name: wlsecret
              key: password
        volumeMounts:
        - name: domain-home
          mountPath: "/u01/wlsdomain"
      volumes:
      - name: domain-home
        persistentVolumeClaim:
          claimName: wlserver-pvc-1
- Create the Administration Server and Managed Server pods with the NFS shared volume. Each WebLogic Server instance will start from the mounted domain location:
$ kubectl create -f wls-admin.yml
$ kubectl create -f wls-stateful.yml
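As in Use Case 1, you can check that the pods are running and share one domain home; in addition, the domain files should now be visible on the NFS export itself. A sketch (pod names and listings are illustrative):

$ kubectl get pods                                     # all server pods should reach Running
$ kubectl exec managed-server-0 -- ls /u01/wlsdomain   # domain home as seen from a Managed Server pod
$ ls /scratch/nfsdata                                  # on the NFS host: the same domain directory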
Summary
This blog describes best practices for setting up Kubernetes data volumes when running a WebLogic domain in a Kubernetes environment. Because Kubernetes pods are ephemeral, it is a best practice to persist the WebLogic domain home, as well as files such as logs and stores, to volumes. Kubernetes provides persistent volumes and persistent volume claims to simplify externalizing state and persisting important data. We provide two use cases: the first maps the volume to a host path on the machines where the Kubernetes nodes run; the second uses an NFS shared volume through a PV and PVC. In both use cases, all WebLogic Server instances must have access to the files that are mapped to these volumes.