As part of certifying WebLogic Server on Kubernetes, we have identified best practices for sharing file data among WebLogic Server pods that are running in a Kubernetes environment. In this blog, I review the WebLogic Server services and files that are typically configured to leverage shared storage, and I provide full end-to-end samples, which you can download and run, that show mounting shared storage for a WebLogic domain that is orchestrated by Kubernetes.
When running WebLogic Server on Kubernetes, refer to the blog Docker Volumes in WebLogic for information about the advantages of using data volumes. That blog also identifies the WebLogic Server artifacts that are good candidates for being persisted in data volumes.
In a Kubernetes environment, pods are ephemeral. To persist data, Kubernetes provides the Volume abstraction, and the PersistentVolume (PV) and PersistentVolumeClaim (PVC) API resources.
Based on the official Kubernetes definitions [Kubernetes Volumes and Kubernetes Persistent Volumes and Claims], a PV is a piece of storage in the cluster that has been provisioned by an administrator, and a PVC is a request for storage by a user. Therefore, PVs and PVCs are independent entities outside of pods. They can be easily referenced by a pod for file persistence and file sharing among pods inside a Kubernetes cluster.
When running WebLogic Server on Kubernetes, we recommend using PVs and PVCs to handle shared storage: because they are independent of any individual pod, they let domain files be persisted and shared by all of the pods in the cluster.
To see the details about the samples, or to run them locally, please download the examples and follow the steps provided below.
Host machine: Oracle Linux 7u3 UEK4 (x86-64)
Kubernetes v1.7.8
Docker 17.03 CE
Put the sample source code in a local folder named wls-k8s-domain.
$ cd wls-k8s-domain
$ docker build -t wls-k8s-domain .
# systemctl start rpcbind.service
# systemctl start nfs.service
# systemctl start nfslock.service
$ mkdir -p /scratch/nfsdata
$ chmod o+rw /scratch/nfsdata
# echo "/scratch/nfsdata *(rw,fsid=root,no_root_squash,no_subtree_check)" >> /etc/exports
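After updating /etc/exports, the new export needs to be published before the Kubernetes nodes can mount it. The commands below are standard NFS utilities rather than part of the downloadable sample; run them as root on the NFS host to re-read /etc/exports and confirm that the directory is exported:

# exportfs -ra
# showmount -e localhost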
By default, in the WebLogic domain wls-k8s-domain, all processes in pods that contain WebLogic Server instances run with user ID 1000 and group ID 1000. Proper permissions need to be set on the external NFS shared directory to make sure that user ID 1000 and group ID 1000 can read and write the NFS volume. To simplify permissions management in the examples, we also grant read and write permission to others on the shared directory.
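If you would rather not grant write access to others, an alternative (not part of the downloadable sample) is to give ownership of the shared directory to the same user and group IDs that the pods run as, on the NFS host:

# chown -R 1000:1000 /scratch/nfsdata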
The WebLogic domain consists of an Administration Server and multiple Managed Servers, each running inside its own pod. All pods have volumes directly mounted to a folder on the physical machine. The domain home is created in a shared folder when the Administration Server pod is first started.
At runtime, all WebLogic Server instances, including the Administration Server, share the same domain home directory via a mounted volume.
Note: This example runs on a single machine, or node, but the approach also works when running the WebLogic domain across multiple machines, provided that every machine shares the same underlying directory. The host path on each node then refers to that directory, so access to the volume is governed by the shared directory itself. Given a set of machines that are already set up with a shared directory, this approach is simpler than setting up an NFS client, although perhaps not as portable.
To run this example, complete the following steps:
Edit wls-admin.yml to mount the host folder /scratch/data to /u01/wlsdomain in the Administration Server pod:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: admin-server
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: admin-server
    spec:
      containers:
      - name: admin-server
        image: wls-k8s-domain
        imagePullPolicy: Never
        command: ["sh"]
        args: ["/u01/oracle/startadmin.sh"]
        readinessProbe:
          httpGet:
            path: /weblogic/ready
            port: 8001
          initialDelaySeconds: 15
          timeoutSeconds: 5
        ports:
        - containerPort: 8001
        env:
        - name: WLUSER
          valueFrom:
            secretKeyRef:
              name: wlsecret
              key: username
        - name: WLPASSWORD
          valueFrom:
            secretKeyRef:
              name: wlsecret
              key: password
        volumeMounts:
        # name must match the volume name below
        - name: domain-home
          mountPath: "/u01/wlsdomain"
      volumes:
      - name: domain-home
        hostPath:
          path: /scratch/data
          type: Directory
Edit wls-stateful.yml to mount the host folder /scratch/data to /u01/wlsdomain in the Managed Server pods:
apiVersion: apps/v1beta1 # for versions before 1.6.0 use extensions/v1beta1
kind: StatefulSet
metadata:
  name: managed-server
spec:
  serviceName: wls-subdomain
  replicas: 2
  template:
    metadata:
      name: ms
      labels:
        app: managed-server
    spec:
      subdomain: wls-subdomain
      containers:
      - name: managed-server
        image: wls-k8s-domain
        imagePullPolicy: Never
        command: ["sh"]
        args: ["/u01/oracle/startms.sh"]
        readinessProbe:
          httpGet:
            path: /weblogic/ready
            port: 8011
          initialDelaySeconds: 15
          timeoutSeconds: 5
        ports:
        - containerPort: 8011
        env:
        - name: JAVA_OPTIONS
          value: "-Dweblogic.StdoutDebugEnabled=true"
        - name: USER_MEM_ARGS
          value: "-Xms64m -Xmx256m "
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: DNS_DOMAIN_NAME
          value: "wls-subdomain"
        - name: WLUSER
          valueFrom:
            secretKeyRef:
              name: wlsecret
              key: username
        - name: WLPASSWORD
          valueFrom:
            secretKeyRef:
              name: wlsecret
              key: password
        volumeMounts:
        # name must match the volume name below
        - name: domain-home
          mountPath: "/u01/wlsdomain"
      volumes:
      - name: domain-home
        hostPath:
          path: /scratch/data
          type: Directory
$ kubectl create -f wls-admin.yml
$ kubectl create -f wls-stateful.yml
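Once the pods are running, you can confirm that the domain home was created in the shared host folder and is visible from inside the pods. The commands below are a suggested check rather than part of the sample; the actual Administration Server pod name includes a suffix generated by Kubernetes, so substitute it for the placeholder:

$ kubectl get pods
$ ls /scratch/data
$ kubectl exec <admin-server-pod-name> -- ls /u01/wlsdomain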
This example shows a WebLogic Server cluster with one Administration Server and several Managed Server instances, each server residing in a dedicated pod. All the pods have volumes mounted to a central NFS server located on a physical machine that the pods can reach. The first time the Administration Server pod is started, the WebLogic domain is created in the shared NFS folder. At runtime, all WebLogic Server instances, including the Administration Server, share the same domain home directory through a volume that is mounted by using a PV and PVC.
Edit pv.yml to make sure each WebLogic Server instance has read and write access to the NFS shared folder:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv1
  labels:
    app: wls-domain
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle # Retain, Recycle, Delete
  nfs:
    # Please use the correct NFS server host name or IP address
    server: 10.232.128.232
    path: "/scratch/nfsdata"
Edit pvc.yml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wlserver-pvc-1
  labels:
    app: wls-server
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
Kubernetes will find the matching PV for the PVC, and bind them together [Kubernetes Persistent Volumes and Claims].
$ kubectl create -f pv.yml
$ kubectl create -f pvc.yml
Then check the PVC status to make sure it is bound to the PV:
$ kubectl get pvc
NAME             STATUS    VOLUME    CAPACITY   ACCESSMODES   STORAGECLASS   AGE
wlserver-pvc-1   Bound     pv1       10Gi       RWX           manual         7s
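You can also confirm the binding from the persistent volume side; these commands are an optional check, and the PV should report a Bound status with the claim wlserver-pvc-1:

$ kubectl get pv
$ kubectl describe pv pv1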
Edit wls-admin.yml to mount the NFS shared folder to /u01/wlsdomain in the WebLogic Server Administration Server pod:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: admin-server
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: admin-server
    spec:
      containers:
      - name: admin-server
        image: wls-k8s-domain
        imagePullPolicy: Never
        command: ["sh"]
        args: ["/u01/oracle/startadmin.sh"]
        readinessProbe:
          httpGet:
            path: /weblogic/ready
            port: 8001
          initialDelaySeconds: 15
          timeoutSeconds: 5
        ports:
        - containerPort: 8001
        env:
        - name: WLUSER
          valueFrom:
            secretKeyRef:
              name: wlsecret
              key: username
        - name: WLPASSWORD
          valueFrom:
            secretKeyRef:
              name: wlsecret
              key: password
        volumeMounts:
        # name must match the volume name below
        - name: domain-home
          mountPath: "/u01/wlsdomain"
      volumes:
      - name: domain-home
        persistentVolumeClaim:
          claimName: wlserver-pvc-1
Edit wls-stateful.yml to mount the NFS shared folder to /u01/wlsdomain in each Managed Server pod:
apiVersion: apps/v1beta1 # for versions before 1.6.0 use extensions/v1beta1
kind: StatefulSet
metadata:
  name: managed-server
spec:
  serviceName: wls-subdomain
  replicas: 2
  template:
    metadata:
      name: ms
      labels:
        app: managed-server
    spec:
      subdomain: wls-subdomain
      containers:
      - name: managed-server
        image: wls-k8s-domain
        imagePullPolicy: Never
        command: ["sh"]
        args: ["/u01/oracle/startms.sh"]
        readinessProbe:
          httpGet:
            path: /weblogic/ready
            port: 8011
          initialDelaySeconds: 15
          timeoutSeconds: 5
        ports:
        - containerPort: 8011
        env:
        - name: JAVA_OPTIONS
          value: "-Dweblogic.StdoutDebugEnabled=true"
        - name: USER_MEM_ARGS
          value: "-Xms64m -Xmx256m "
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: DNS_DOMAIN_NAME
          value: "wls-subdomain"
        - name: WLUSER
          valueFrom:
            secretKeyRef:
              name: wlsecret
              key: username
        - name: WLPASSWORD
          valueFrom:
            secretKeyRef:
              name: wlsecret
              key: password
        volumeMounts:
        - name: domain-home
          mountPath: "/u01/wlsdomain"
      volumes:
      - name: domain-home
        persistentVolumeClaim:
          claimName: wlserver-pvc-1
$ kubectl create -f wls-admin.yml
$ kubectl create -f wls-stateful.yml
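As in the first example, you can verify that all server pods write to the same NFS-backed domain home. These checks are a suggestion rather than part of the sample, and the pod name placeholder should be replaced with the generated name; also check the contents of /scratch/nfsdata on the NFS host:

$ kubectl get pods
$ kubectl exec <managed-server-pod-name> -- ls /u01/wlsdomain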
This blog describes best practices for setting up Kubernetes data volumes when running a WebLogic domain in a Kubernetes environment. Because Kubernetes pods are ephemeral, it is a best practice to persist the WebLogic domain home to volumes, as well as files such as logs, stores, and so on. Kubernetes provides persistent volumes and persistent volume claims to simplify externalizing state and persisting important data to volumes. We provide two use cases: the first maps the volume to a directory on the host machines where the Kubernetes nodes are running; the second uses an NFS shared volume. In both use cases, all WebLogic Server instances must have access to the files that are mapped to these volumes.