Introduction
In a previous blog, we explored how to use s3-fuse in an auxiliary pod within the deployment to access OCI Object Storage. In this blog, we will focus on how to use OCI Object Storage buckets as a Persistent Volume Claim (PVC) in Kubernetes. The strategy we’ll discuss involves static provisioning with Mountpoint.
To use our bucket as a native Kubernetes object, an additional layer is required to enable communication between Kubernetes and OCI. This is achieved through the use of a CSI driver plugin. When listing the default OKE drivers, the following can be observed:
$ kubectl get csidriver
NAME                              ATTACHREQUIRED   PODINFOONMOUNT   STORAGECAPACITY   TOKENREQUESTS   REQUIRESREPUBLISH   MODES        AGE
fss.csi.oraclecloud.com           false            false            false             <unset>         false               Persistent   <age>
blockvolume.csi.oraclecloud.com   false            false            false             <unset>         false               Persistent   <age>
The fss.csi.oraclecloud.com and blockvolume.csi.oraclecloud.com CSI drivers only provide access to the OCI File Storage Service (FSS) and OCI Block Volumes (BV), respectively; since neither supports access to an Object Storage bucket, a third-party solution is required. Taking advantage of Object Storage’s compatibility with the AWS S3 API, Mountpoint is a great choice: it is a high-throughput, open-source file client for mounting an S3 bucket as a local file system.
Configuring the environment
As a first step, a couple of requirements need to be met:
- An Oracle Cloud Infrastructure account will be required.
- A Customer Secret Key needs to be configured for your account in order to use this feature; you can find the steps here: S3 Compatible API (a CLI example follows this list).
- OCI Kubernetes Engine (OKE) or another Kubernetes engine should already be provisioned in your tenancy, and the policies required to use it should be properly configured.
- A bucket should be provisioned in the tenancy.
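As an illustrative sketch of the Customer Secret Key step, the key can also be created and listed with the OCI CLI; the user OCID and display name below are placeholders, and the "key" value returned by the create command is the secret access key, which is only displayed once:
$ oci iam customer-secret-key create --user-id ocid1.user.oc1..<your-user-ocid> --display-name "oke-mountpoint-s3"
$ oci iam customer-secret-key list --user-id ocid1.user.oc1..<your-user-ocid>
The "id" field returned by the list command is the access key id that will be placed in the Kubernetes Secret below.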
To allow OCI Kubernetes Engine to use Object Storage, access needs to be configured so that the driver can execute all the required operations using the OCI Object Storage API. The Customer Secret Key is configured as a Secret in the kube-system namespace as follows:
secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: oci-secret
  namespace: kube-system
type: Opaque
data:
  access_key: <customer secret key, base64 encoded>
  key_id: <customer access key id, base64 encoded>
Apply the secret.yaml in your cluster.
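For example, both values can be base64 encoded (as required for the data fields of a Kubernetes Secret), after which the Secret is applied and verified; the bracketed values are your Customer Secret Key pair:
$ echo -n '<customer access key id>' | base64
$ echo -n '<customer secret key>' | base64
$ kubectl apply -f secret.yaml
$ kubectl get secret oci-secret -n kube-system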
The mountpoint-s3-csi-driver project provides a number of examples of how to use AWS S3 with EKS, but since we’re using OCI Kubernetes Engine (OKE), a couple of settings need to be customized in order for it to work with the Object Storage service:
Define the file oci-oke-values.yaml and customize the content according to your cluster:
image:
  repository: public.ecr.aws/mountpoint-s3-csi-driver/aws-mountpoint-s3-csi-driver
  pullPolicy: IfNotPresent
  tag: "v1.14.1"
node:
  kubeletPath: /var/lib/kubelet
  mountpointInstallPath: /opt/mountpoint-s3-csi/bin/ # should end with "/"
  logLevel: 4
  seLinuxOptions:
    user: system_u
    type: container_t
    role: system_r
    level: s0
  serviceAccount:
    # Specifies whether a service account should be created
    create: true
    name: s3-csi-driver-sa
    # Specify the SA's role ARN if running in EKS. Otherwise, the driver will be "Forbidden" from accessing s3 buckets
    # annotations:
    #   "eks.amazonaws.com/role-arn": ""
  nodeSelector: {}
  resources:
    requests:
      cpu: 10m
      memory: 40Mi
    limits:
      memory: 256Mi
  # Tolerates all taints and overrides defaultTolerations
  tolerateAllTaints: false
  defaultTolerations: true
  tolerations: []
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: eks.amazonaws.com/compute-type
                operator: NotIn
                values:
                  - fargate
                  - hybrid
  podInfoOnMountCompat:
    enable: false
sidecars:
  nodeDriverRegistrar:
    image:
      repository: public.ecr.aws/eks-distro/kubernetes-csi/node-driver-registrar
      tag: v2.10.0-eks-1-29-7
      pullPolicy: IfNotPresent
    env:
      - name: KUBE_NODE_NAME
        valueFrom:
          fieldRef:
            fieldPath: spec.nodeName
    volumeMounts:
      - name: plugin-dir
        mountPath: /csi
      - name: registration-dir
        mountPath: /registration
    resources: {}
  livenessProbe:
    image:
      repository: public.ecr.aws/eks-distro/kubernetes-csi/livenessprobe
      tag: v2.12.0-eks-1-29-7
      pullPolicy: IfNotPresent
    volumeMounts:
      - mountPath: /csi
        name: plugin-dir
    resources: {}
initContainer:
  installMountpoint:
    resources: {}
controller:
  serviceAccount:
    # Specifies whether a service account should be created
    create: true
    name: s3-csi-driver-controller-sa
mountpointPod:
  namespace: mount-s3
  priorityClassName: mount-s3-critical
nameOverride: ""
fullnameOverride: ""
imagePullSecrets: []
awsAccessSecret:
  name: oci-secret
  keyId: key_id
  accessKey: access_key
  sessionToken: session_token
experimental:
  podMounter: false
In the original chart values, the SELinux option type was set to super_t; we have changed this to container_t, so adjust the policy to your scenario. By default, a super_t policy is not created in OKE upon provisioning, but a new policy with higher priority/privileges than container_t could be created and used if necessary.
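As a quick sanity check, assuming SELinux is enabled on your worker nodes and the install path from the values file above, the resulting mode and labels can be inspected directly on a node:
$ getenforce
$ ls -Z /opt/mountpoint-s3-csi/bin/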
Finally, use Helm to add the repository and install the chart using the previously defined values:
$ helm repo add aws-mountpoint-s3-csi-driver https://awslabs.github.io/mountpoint-s3-csi-driver
$ helm repo update
$ helm upgrade --install aws-mountpoint-s3-csi-driver -f oci-oke-values.yaml --namespace kube-system aws-mountpoint-s3-csi-driver/aws-mountpoint-s3-csi-driver
The --dry-run option can be used to preview the changes that will be applied to the kube-system namespace. To ensure everything is working as expected, wait for the pods to reach the Running state.
$ kubectl get pods -n kube-system
NAME                READY   STATUS    RESTARTS   AGE
...
s3-csi-node-4hjvw   3/3     Running   0          24s
s3-csi-node-mh9hz   3/3     Running   0          24s
s3-csi-node-s42ws   3/3     Running   0          24s
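For reference, a dry run and a wait on the node DaemonSet could look like the following; the DaemonSet name s3-csi-node is inferred from the pod names above and may differ in your installation:
$ helm upgrade --install aws-mountpoint-s3-csi-driver -f oci-oke-values.yaml --namespace kube-system aws-mountpoint-s3-csi-driver/aws-mountpoint-s3-csi-driver --dry-run
$ kubectl rollout status daemonset/s3-csi-node -n kube-system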
The CSI driver should be available in the OCI Kubernetes Engine:
$ kubectl get csidriver
NAME             ATTACHREQUIRED   PODINFOONMOUNT   STORAGECAPACITY   TOKENREQUESTS       REQUIRESREPUBLISH   MODES        AGE
...
s3.csi.aws.com   false            false            false             sts.amazonaws.com   true                Persistent   5m
...
Object Storage Bucket as Persistent Volume Claim (PVC)
Once the CSI driver configuration is complete, the next step is to define the objects that create the persistent volume, the persistent volume claim, and the deployment that uses them. It is important to note that some Mountpoint options, such as force-path-style, endpoint-url, region, and upload-checksums, are required for compatibility between S3 and the OCI Object Storage API; more mountpoint-s3 options are available in case further customization is required.
app.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: oci-s3-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany # Supported options: ReadWriteMany / ReadOnlyMany
  storageClassName: "" # Required for static provisioning
  claimRef: # To ensure no other PVCs can claim this PV
    namespace: default
    name: oci-s3-pvc
  mountOptions:
    - force-path-style
    - region <oci-region>
    - upload-checksums off
    - endpoint-url https://<oci-tenancy>.compat.objectstorage.<oci-region>.oraclecloud.com
  csi:
    driver: s3.csi.aws.com
    volumeHandle: s3-csi-driver-volume
    volumeAttributes:
      bucketName: <your bucket name>
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: oci-s3-pvc
spec:
  accessModes:
    - ReadWriteMany # Supported options: ReadWriteMany / ReadOnlyMany
  storageClassName: "" # Required for static provisioning
  resources:
    requests:
      storage: 5Gi
  volumeName: oci-s3-pv
---
apiVersion: v1
kind: Pod
metadata:
  name: object-storage-app
spec:
  containers:
    - name: app
      image: container-registry.oracle.com/os/oraclelinux:9
      command: ["/bin/sh"]
      args: ["-c", "tail -f /dev/null"] # replace for your process
      volumeMounts:
        - name: object-storage-service
          mountPath: /data
  volumes:
    - name: object-storage-service
      persistentVolumeClaim:
        claimName: oci-s3-pvc
In the above example, replace <oci-tenancy>, <oci-region> and <your bucket name> with your own tenancy name, deployment region, and Object Storage bucket name, then apply the app.yaml and access the deployment to ensure your Object Storage bucket was mounted correctly:
$ kubectl get pods
NAME                 READY   STATUS    RESTARTS   AGE
object-storage-app   1/1     Running   0          9s
$ kubectl exec -it object-storage-app -- bash
bash-5.1# mount -v
...
mountpoint-s3 on /data type fuse (rw,nosuid,nodev,noatime,user_id=0,group_id=0,default_permissions)
...
bash-5.1# echo "hello world" > /data/hello.txt
bash-5.1# ls /data
hello.txt
...
bash-5.1# cat /data/hello.txt
hello world
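To confirm from the OCI side that the file written in the pod actually landed in the bucket, the OCI CLI can be used; the bucket name is a placeholder and the tenancy's default Object Storage namespace is assumed:
$ oci os object list --bucket-name <your bucket name>
The hello.txt object created above should appear in the listing.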
If you’re unable to write to or read from the mounted Object Storage file system, check your mount options and increase the verbosity of the CSI driver and of the deployment’s mount options to surface more information about the operations performed in the background between OKE, the CSI driver, and OCI.
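As a starting point, the following commands surface most mount problems; the label selector app=s3-csi-node is assumed from the chart defaults and may differ in your installation:
$ kubectl logs -n kube-system -l app=s3-csi-node --all-containers=true --tail=100
$ kubectl describe pod object-storage-app
The events reported by kubectl describe usually include the exact Mountpoint error when a volume fails to mount.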
Important notes
There are a number of third-party plugins and drivers other than Mountpoint that can be used with Object Storage buckets and are also available as CSI drivers for Kubernetes. These solutions can be explored if necessary until more official support for Object Storage in OKE is available (such as a native CSI driver for OCI Object Storage).
The Object Storage API’s compatibility with S3 enables integration with third-party tools and allows support from open-source community implementations.
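For instance, the same S3-compatible endpoint used in the PersistentVolume above can be exercised directly with the AWS CLI; this is a sketch in which the credentials are the Customer Secret Key pair and all bracketed values are placeholders:
$ export AWS_ACCESS_KEY_ID=<customer access key id>
$ export AWS_SECRET_ACCESS_KEY=<customer secret key>
$ aws s3 ls s3://<your bucket name> --endpoint-url https://<oci-tenancy>.compat.objectstorage.<oci-region>.oraclecloud.com --region <oci-region>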
For more information, see the following resources:
