Deploying MySQL InnoDB ClusterSet across multiple Kubernetes clusters is a complex challenge. While the MySQL Operator for Kubernetes can manage a MySQL InnoDB Cluster within a single Kubernetes cluster, deploying an InnoDB ClusterSet across multiple clusters requires cross-cluster communication, which is not yet supported by the current MySQL Operator release.

This is where Cilium comes into play. Cilium offers a powerful solution by utilizing its advanced networking capabilities to enable secure and efficient cross-cluster communication. By integrating Cilium, you benefit from native Kubernetes support, simplified management, and enhanced performance for stateful applications like MySQL.

Deploying a MySQL InnoDB ClusterSet across multi-cluster Kubernetes environments improves scalability, minimizes downtime, ensures data consistency, and provides business continuity in the event of a Primary Cluster failure. It also strengthens the resilience of InnoDB Cluster deployments managed by the MySQL Operator. The diagram below illustrates a MySQL InnoDB ClusterSet, consisting of a Primary Cluster and a Replica Cluster, deployed across two Kubernetes clusters, with Cilium facilitating cross-cluster communication.

MySQL InnoDB ClusterSet across multiple Kubernetes clusters with Cilium

Setting up an InnoDB ClusterSet across multiple Kubernetes clusters is currently a manual process, as outlined below. This setup applies to MySQL InnoDB ClusterSet running on on-premises Kubernetes clusters or on any fully managed Kubernetes service in a public cloud that uses the MySQL container image. However, the process can be scripted and automated using CI/CD tools.
 

Deployment of Cilium Cluster Mesh

Skip this section if Cilium Cluster Mesh is already installed. Get the kubeconfig contexts to distinguish the PROD cluster from the DR cluster:

$ kubectl config get-contexts
CURRENT   NAME                  CLUSTER               AUTHINFO              NAMESPACE
          context-cabqjcrx6nq   cluster-cabqjcrx6nq   user-cabqjcrx6nq
*         context-che34bk2roa   cluster-che34bk2roa   user-che34bk2roa


Replace the context-cabqjcrx6nq and context-che34bk2roa values with the corresponding context names in your environment. For ease of reference, store the kubeconfig context for the PROD cluster in $CONTEXT1 and the one for the DR cluster in $CONTEXT2.

$ export CONTEXT1=context-che34bk2roa
$ export CONTEXT2=context-cabqjcrx6nq


To install Cilium using Helm, add the Cilium Helm repo and save the default values files locally for editing:

$ helm repo add cilium https://helm.cilium.io/
$ helm show values cilium/cilium > prod-cilium.yaml
$ helm show values cilium/cilium > dr-cilium.yaml


Modify prod-cilium.yaml and change the defaults to the following values:

cluster:
  name: prod
  id: 1
hubble:
  tls:
    enabled: false
  relay:
    enabled: true
  ui:
    enabled: true
ipam:
  mode: "kubernetes"


Then apply it to the PROD Kubernetes cluster using Helm:

$ helm install cilium cilium/cilium --namespace=kube-system -f prod-cilium.yaml --kube-context $CONTEXT1


Modify the dr-cilium.yaml file to match the settings in prod-cilium.yaml, except set the cluster name to "dr" and the cluster ID to 2, as sketched below; then apply it to the DR Kubernetes cluster using Helm.
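
A minimal sketch of the values that differ in dr-cilium.yaml (every other value mirrors prod-cilium.yaml):

cluster:
  name: dr
  id: 2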

$ helm install cilium cilium/cilium --namespace=kube-system -f dr-cilium.yaml --kube-context $CONTEXT2


On each cluster, delete all pods that are not managed by Cilium so that they are restarted under Cilium:

kubectl config use-context $CONTEXT1
kubectl delete pod --namespace kube-system -l k8s-app=kube-dns
kubectl delete pod --namespace kube-system -l k8s-app=hubble-relay
kubectl delete pod --namespace kube-system -l k8s-app=hubble-ui
kubectl delete pod --namespace kube-system -l k8s-app=kube-dns-autoscaler


kubectl config use-context $CONTEXT2
kubectl delete pod --namespace kube-system -l k8s-app=kube-dns
kubectl delete pod --namespace kube-system -l k8s-app=hubble-relay
kubectl delete pod --namespace kube-system -l k8s-app=hubble-ui
kubectl delete pod --namespace kube-system -l k8s-app=kube-dns-autoscaler


Run "cilium status" on each cluster to check the Cilium deployment. A successful Cilium deployment will look like this in the PROD cluster:
Cilium-Cluster-Mesh-1

And it will look like this in the DR cluster:

cilium-cluster-mesh-2
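
Assuming the Cilium CLI is installed on your workstation, the same check can be run against each cluster by passing the kubeconfig context explicitly:

$ cilium status --context $CONTEXT1
$ cilium status --context $CONTEXT2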

Cluster Mesh is currently disabled in the PROD cluster. To enable it, modify the prod-cilium.yaml file as follows:

clusterMesh:
  useAPIServer: true
ciliumEndpointSlice:
  enabled: true


Then redeploy Cilium using Helm:

$ helm upgrade cilium cilium/cilium --namespace=kube-system -f prod-cilium.yaml --kube-context $CONTEXT1


Check the Cluster Mesh status in the PROD cluster; it is now enabled.

$ cilium clustermesh status --context $CONTEXT1
⚠️  Service type NodePort detected! Service may fail when nodes are removed from the cluster!
Service "clustermesh-apiserver" of type "NodePort" found
Cluster access information is available:
  - 158.178.237.122:32379
Deployment clustermesh-apiserver is ready
ℹ️ KVStoreMesh is enabled
🔌 No cluster connected
🔀 Global services: [ min:0 / avg:0.0 / max:0 ]


Repeat the same step on the DR cluster. Modify the dr-cilium.yaml file with the same clusterMesh settings as prod-cilium.yaml to enable Cluster Mesh, then run the following:

$ helm upgrade cilium cilium/cilium --namespace=kube-system -f dr-cilium.yaml --kube-context $CONTEXT2
$ cilium clustermesh status --context $CONTEXT2


To enable cross-cluster communication, connect the PROD cluster to the DR cluster using the following command:

$ cilium clustermesh connect --context $CONTEXT1 --destination-context $CONTEXT2


Check the Cluster Mesh status from the PROD cluster. A successful cluster mesh will look like this:

$ cilium clustermesh status --context $CONTEXT1
⚠️  Service type NodePort detected! Service may fail when nodes are removed from the cluster!
Service "clustermesh-apiserver" of type "NodePort" found
Cluster access information is available:
  - 158.178.237.122:32379
Deployment clustermesh-apiserver is ready
ℹ️  KVStoreMesh is enabled
All 3 nodes are connected to all clusters [min:1 / avg:1.0 / max:1]
All 1 KVStoreMesh replicas are connected to all clusters [min:1 / avg:1.0 / max:1]
🔌 Cluster Connections:
  - dr: 3/3 configured, 3/3 connected - KVStoreMesh: 1/1 configured, 1/1 connected


Similarly, check the Cluster Mesh status from the DR cluster.

$ cilium clustermesh status --context $CONTEXT2
⚠️  Service type NodePort detected! Service may fail when nodes are removed from the cluster!
Service "clustermesh-apiserver" of type "NodePort" found
Cluster access information is available:
  - 134.185.86.75:32379
Deployment clustermesh-apiserver is ready
ℹ️  KVStoreMesh is enabled
All 3 nodes are connected to all clusters [min:1 / avg:1.0 / max:1]
All 1 KVStoreMesh replicas are connected to all clusters [min:1 / avg:1.0 / max:1]
🔌 Cluster Connections:
  - prod: 3/3 configured, 3/3 connected - KVStoreMesh: 1/1 configured, 1/1 connected
🔀 Global services: [ min:0 / avg:0.0 / max:0 ]


MySQL InnoDB ClusterSet requires the FQDN of each Pod to be resolvable from each cluster. Apply the following manifest to the PROD and DR clusters using kubectl to expose CoreDNS as a NodePort service.

apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: core-dns-nodeport
    kubernetes.io/name: CoreDNS
  name: core-dns-nodeport
  namespace: kube-system
spec:
  ports:
  - name: dns
    port: 53
    protocol: UDP
    targetPort: 53
    nodePort: 30053
  - name: dns-tcp
    port: 53
    protocol: TCP
    targetPort: 53
    nodePort: 30053
  selector:
    k8s-app: kube-dns
  sessionAffinity: None
  type: NodePort


Assuming the manifest above is saved as coreDNS_nodeport.yaml, apply it to both clusters using the following commands:

$ kubectl apply -f coreDNS_nodeport.yaml --context=$CONTEXT1
$ kubectl apply -f coreDNS_nodeport.yaml --context=$CONTEXT2
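
Optionally, confirm that the Service exists and exposes NodePort 30053 on both clusters:

$ kubectl -n kube-system get svc core-dns-nodeport --context=$CONTEXT1
$ kubectl -n kube-system get svc core-dns-nodeport --context=$CONTEXT2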

 

Prepare Environment to run InnoDB Clusters on PROD and DR

Skip this section if it has already been done.

Create a namespace and a secret on the PROD cluster for the InnoDB Cluster. Each InnoDB Cluster within a ClusterSet must have a unique namespace name. Below, we are using mysql-prod as the namespace for the PROD InnoDB Cluster (cluster name "prd") and setting the MySQL root password to root.

$ kubectl create ns mysql-prod --context=$CONTEXT1
$ kubectl -n mysql-prod create secret generic mypwds --from-literal=rootUser=root --from-literal=rootHost=% --from-literal=rootPassword="root" --context=$CONTEXT1


We are using mysql-dr as the namespace for the DR InnoDB Cluster (cluster name "dr") and setting the MySQL root password to root, the same as for the PROD InnoDB Cluster.

$ kubectl create ns mysql-dr --context=$CONTEXT2
$ kubectl -n mysql-dr create secret generic mypwds --from-literal=rootUser=root --from-literal=rootHost=% --from-literal=rootPassword="root" --context=$CONTEXT2


On the PROD cluster, edit the CoreDNS configuration in the kube-system namespace to forward DNS queries for the mysql-dr.svc.cluster.local zone to the DR cluster worker node IP addresses on port 30053.

$ kubectl -n kube-system edit configmap coredns --context=$CONTEXT1

configmap-coredns-1
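
For reference, the change adds a server block to the Corefile in the coredns ConfigMap similar to the minimal sketch below; the IP addresses are placeholders for your DR cluster worker node IPs:

mysql-dr.svc.cluster.local:53 {
    errors
    cache 30
    # placeholder IPs - replace with your DR cluster worker node addresses
    forward . 10.0.10.11:30053 10.0.10.12:30053 10.0.10.13:30053
}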

Similarly, on the DR cluster, edit the CoreDNS configuration in the kube-system namespace to forward DNS queries for the mysql-prod.svc.cluster.local zone to the PROD cluster worker node IP addresses on port 30053.

$ kubectl -n kube-system edit configmap coredns --context=$CONTEXT2

configmap-coredns-2

 

Deployment of MySQL Operator

Skip this section if the MySQL Operator is already installed. First, install the CRDs used by the operator on both clusters.
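
A minimal sketch, assuming the CRD manifest published in the MySQL Operator repository (adjust the URL to the operator release you use):

$ kubectl apply -f https://raw.githubusercontent.com/mysql/mysql-operator/trunk/deploy/deploy-crds.yaml --context=$CONTEXT1
$ kubectl apply -f https://raw.githubusercontent.com/mysql/mysql-operator/trunk/deploy/deploy-crds.yaml --context=$CONTEXT2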


Download the manifest file deploy-operator.yaml using wget, as shown below.
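
For example, assuming the manifest published in the MySQL Operator repository (adjust the URL to the release you want to deploy):

$ wget https://raw.githubusercontent.com/mysql/mysql-operator/trunk/deploy/deploy-operator.yaml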


Update deploy-operator.yaml by adding the MYSQL_OPERATOR_K8S_CLUSTER_DOMAIN environment variable to the operator container and setting it to the Kubernetes cluster domain (cluster.local by default), as shown below.

env:
    - name: MYSQLSH_USER_CONFIG_HOME
      value: /mysqlsh
    - name: MYSQLSH_CREDENTIAL_STORE_SAVE_PASSWORDS
      value: never
    - name: MYSQL_OPERATOR_K8S_CLUSTER_DOMAIN
      value: cluster.local


Apply deploy-operator.yaml to each cluster:

$ kubectl apply -f deploy-operator.yaml --context=$CONTEXT1
$ kubectl apply -f deploy-operator.yaml --context=$CONTEXT2


Check MySQL Operator deployment on PROD Cluster:

$ kubectl -n mysql-operator get pod --context=$CONTEXT1
NAME                              READY   STATUS    RESTARTS   AGE
mysql-operator-7f84cb7784-x2cjg   1/1     Running   0          41s


Check MySQL Operator deployment on DR Cluster:

$ kubectl -n mysql-operator get pod --context=$CONTEXT2
NAME                              READY   STATUS    RESTARTS   AGE
mysql-operator-79bccbddcd-wj96l   1/1     Running   0          51s

 

Deploying MySQL InnoDB Cluster With MySQL Operator

Prepare a manifest file (named PRD_cluster.yaml) to deploy the InnoDB Cluster on the PROD cluster, using mysql-prod as the namespace, as shown below:

apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: prd
  namespace: mysql-prod
spec:
  secretName: mypwds
  baseServerId: 100
  tlsUseSelfSigned: true
  instances: 3
  router:
    instances: 1


Similarly, prepare another manifest file (named DR_cluster.yaml) to deploy the InnoDB Cluster on the DR cluster, using mysql-dr as the namespace, as shown below:

apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: dr
  namespace: mysql-dr
spec:
  secretName: mypwds
  baseServerId: 200
  tlsUseSelfSigned: true
  instances: 3
  router:
    instances: 1


In an InnoDB ClusterSet, each instance must have a unique server_id. Therefore, the baseServerId parameter in the YAML configuration must be different for each cluster. Apply both YAML files to their respective clusters using the following commands:

$ kubectl apply -f PRD_cluster.yaml --context=$CONTEXT1
$ kubectl apply -f DR_cluster.yaml --context=$CONTEXT2


Wait about 2-3 minutes, then check the InnoDB Cluster deployment status on the PROD cluster using the following command.

$ kubectl -n mysql-prod get ic --context=$CONTEXT1
NAME   STATUS   ONLINE   INSTANCES   ROUTERS   AGE
prd    ONLINE   3        3           1         3m40s


Check InnoDB Cluster deployment status on DR cluster using the following command.

$ kubectl -n mysql-dr get ic --context=$CONTEXT2
NAME   STATUS   ONLINE   INSTANCES   ROUTERS   AGE
dr     ONLINE   3        3           1         3m40s


Both InnoDB Clusters are running with 3 Pods for MySQL instances and 1 Pod for MySQL Router. Let's check the InnoDB Cluster status with MySQL Shell using the following commands:

$ kubectl -n mysql-prod --context=$CONTEXT1 exec -it prd-0 -- mysqlsh root:root@localhost:3306 -- cluster status

ic-k8s-1

$ kubectl -n mysql-dr --context=$CONTEXT2 exec -it dr-0 -- mysqlsh root:root@localhost:3306 -- cluster status

ic-k8s-2

Both InnoDB Clusters are running and healthy.

For cross-cluster communication, the MySQL instance Services must be shared as Cilium global services. Use the following commands to annotate the Service on the PROD cluster:

$ kubectl -n mysql-prod annotate svc prd-instances service.cilium.io/affinity="remote" --context=$CONTEXT1
$ kubectl -n mysql-prod annotate svc prd-instances service.cilium.io/global="true" --context=$CONTEXT1
$ kubectl -n mysql-prod annotate svc prd-instances service.cilium.io/global-sync-endpoint-slices="true" --context=$CONTEXT1
$ kubectl -n mysql-prod annotate svc prd-instances service.cilium.io/shared="true" --context=$CONTEXT1

 

Subsequently, use the following commands to do the same for the MySQL instance Service on the DR cluster:

$ kubectl -n mysql-dr annotate svc dr-instances service.cilium.io/affinity="remote" --context=$CONTEXT2
$ kubectl -n mysql-dr annotate svc dr-instances service.cilium.io/global="true" --context=$CONTEXT2
$ kubectl -n mysql-dr annotate svc dr-instances service.cilium.io/global-sync-endpoint-slices="true" --context=$CONTEXT2
$ kubectl -n mysql-dr annotate svc dr-instances service.cilium.io/shared="true" --context=$CONTEXT2
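
To confirm the Services are now shared across the mesh, re-run the Cluster Mesh status check on either side; the Global services counter should now be greater than zero:

$ cilium clustermesh status --context $CONTEXT1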

 

Let's test logging in to a MySQL Pod on the DR cluster from a MySQL Pod on the PROD cluster using its FQDN:

$ kubectl -n mysql-prod --context=$CONTEXT1 exec -it prd-0 -- mysqlsh root:root@dr-0.dr-instances.mysql-dr.svc.cluster.local:3306

 

Similarly, on the DR cluster, try to log in to a MySQL Pod on the PROD cluster from a MySQL Pod on the DR cluster using its FQDN:

$ kubectl -n mysql-dr --context=$CONTEXT2 exec -it dr-0 -- mysqlsh root:root@prd-0.prd-instances.mysql-prod.svc.cluster.local:3306

 

The InnoDB Clusters on both the PROD and DR clusters are now accessible from the remote Kubernetes cluster using FQDNs, which the InnoDB ClusterSet setup requires. This works because we edited CoreDNS to forward queries for the remote domain to the remote cluster's CoreDNS, which is exposed as a NodePort service.

 

Configure InnoDB ClusterSet on PROD cluster

Use the following command to create an InnoDB ClusterSet on the PROD cluster named "mycs":

$ kubectl --context=$CONTEXT1 -n mysql-prod exec -it prd-0 -- mysqlsh root:root@localhost:3306 -- cluster createClusterSet mycs

 

Redeploy MySQL Router on the PROD cluster to pick up the changes in the cluster metadata:

$ kubectl --context=$CONTEXT1 -n mysql-prod rollout restart deployment prd-router
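
Optionally, wait for the new Router Pod to become ready before continuing:

$ kubectl --context=$CONTEXT1 -n mysql-prod rollout status deployment prd-router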

 

Check the InnoDB ClusterSet status on the PROD cluster; only one InnoDB Cluster is found as a member.

$ kubectl --context=$CONTEXT1 -n mysql-prod exec -it prd-0 -- mysqlsh root:root@localhost:3306 -- clusterset status

icset-1

 

Adding a Replica Cluster to the InnoDB ClusterSet from the DR cluster

First, dissolve the existing InnoDB Cluster on the DR cluster and create a replica cluster named "myreplica", using the primary node of the DR cluster as the first member of the replica cluster.

Dissolve DR cluster:

$ kubectl --context=$CONTEXT2 -n mysql-dr exec -it dr-0 -- mysqlsh root:root@localhost:3306 -- cluster dissolve

 

Create replica cluster:

$ kubectl --context=$CONTEXT1 -n mysql-prod exec -it prd-0 -- mysqlsh root:root@localhost:3306 -- clusterset createReplicaCluster root:root@dr-0.dr-instances.mysql-dr.svc.cluster.local:3306 myreplica --recoveryMethod=clone

 

Check the InnoDB ClusterSet status on the PROD cluster; now two InnoDB Clusters are found in the ClusterSet.

$ kubectl --context=$CONTEXT1 -n mysql-prod exec -it prd-0 -- mysqlsh root:root@localhost:3306 -- clusterset status

icset-k8s-2

Second, export the backup password, cluster admin password, and router password that are kept in Kubernetes Secrets on the PROD cluster, and apply them to the DR cluster.

Export InnoDB Cluster’s user passwords from PROD cluster:

$ export routerPassword=`kubectl --context=$CONTEXT1 -n mysql-prod get secret prd-router -o json | jq -r '.data["routerPassword"]'`

$ export clusterAdminPassword=`kubectl --context=$CONTEXT1 -n mysql-prod get secret prd-privsecrets -o json | jq -r '.data["clusterAdminPassword"]'`

$ export backupPassword=`kubectl --context=$CONTEXT1 -n mysql-prod get secret prd-backup -o json | jq -r '.data["backupPassword"]'`

 

Import InnoDB Cluster’s user passwords into DR cluster:

$ kubectl --context=$CONTEXT2 -n mysql-dr get secret dr-backup -o json | jq --arg x `echo $backupPassword` '.data["backupPassword"]=$x' | kubectl --context=$CONTEXT2 apply -f -

$ kubectl --context=$CONTEXT2 -n mysql-dr get secret dr-privsecrets -o json | jq --arg x `echo $clusterAdminPassword` '.data["clusterAdminPassword"]=$x' | kubectl --context=$CONTEXT2 apply -f -

$ kubectl --context=$CONTEXT2 -n mysql-dr get secret dr-router -o json | jq --arg x `echo $routerPassword` '.data["routerPassword"]=$x' | kubectl --context=$CONTEXT2 apply -f -
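
To confirm the import, compare a stored value with the exported one; for example, for the router password (both commands should print the same base64-encoded value):

$ kubectl --context=$CONTEXT2 -n mysql-dr get secret dr-router -o jsonpath='{.data.routerPassword}'
$ echo $routerPassword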

 

Third, add all secondary nodes back into the replica cluster on the DR cluster:

$ kubectl --context=$CONTEXT2 -n mysql-dr exec -it dr-0 -- mysqlsh root:root@localhost:3306 -- cluster add-instance root:root@dr-1.dr-instances.mysql-dr.svc.cluster.local:3306 --recoveryMethod=clone

$ kubectl --context=$CONTEXT2 -n mysql-dr exec -it dr-0 -- mysqlsh root:root@localhost:3306 -- cluster add-instance root:root@dr-2.dr-instances.mysql-dr.svc.cluster.local:3306 --recoveryMethod=clone

 

Redeploy MySQL Router on the DR cluster to pick up the changes in the cluster metadata:

$ kubectl --context=$CONTEXT2 -n mysql-dr rollout restart deployment dr-router
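
Finally, re-check the InnoDB ClusterSet status from the PROD cluster; both the "prd" primary cluster and the "myreplica" replica cluster should report as healthy members of the "mycs" ClusterSet:

$ kubectl --context=$CONTEXT1 -n mysql-prod exec -it prd-0 -- mysqlsh root:root@localhost:3306 -- clusterset status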

 

Conclusion

Congratulations! You’ve successfully deployed a MySQL InnoDB ClusterSet across multiple Kubernetes clusters. With Cilium deployed, you can now take advantage of its ability to seamlessly connect two or more Kubernetes clusters, making them function as if they were part of the same Kubernetes cluster. This allows an InnoDB Cluster running in one Kubernetes cluster to discover and connect to another InnoDB Cluster in a remote Kubernetes cluster. As a result, you can set up an InnoDB ClusterSet to replicate data between two or more InnoDB Clusters across multiple Kubernetes clusters.