Verrazzano is a hybrid, multicloud, Kubernetes-based enterprise container platform for running both cloud-native and traditional applications. It not only brings a rich set of pre-installed components such as Rancher, Istio with Kiali, Keycloak, Prometheus, Grafana, and more, but it is also fully aligned with the Open Application Model (OAM), an open specification for defining cloud-native applications. OAM enables simple yet robust application delivery across hybrid environments, including Kubernetes, cloud, and even IoT devices.
For more details about OAM and Verrazzano, I recommend reading the blog post “Deploying your first application to Verrazzano“. It is worth highlighting that Verrazzano not only provides three OAM workloads that make it easy to run WebLogic, Coherence, and Helidon applications, but it also provides a way to run ordinary Kubernetes resources.
This blog post will use this capability of Verrazzano to show how you can take any application already deployed in a Kubernetes cluster and move it to Verrazzano without any source code or container modification. To show this, we will use Sock Shop: A Microservice Demo Application. Sock Shop is a well-known open source project (Apache License, Version 2.0) and is free to use for talks, testing, and demos.
The Sock Shop covers different use cases from frontend to persistence, and it’s composed of the following microservice components:

| Component | Language / Framework (Port) | Database / Broker (Port) |
| --- | --- | --- |
| carts | Java / Spring Boot (80) | MongoDB (27017) |
| catalogue | Go (80) | MySQL (3306) |
| front-end | HTML + JavaScript (80) | Redis (6379) |
| orders | Java / Spring Boot (80) | MongoDB (27017) |
| payment | Go (80) | |
| queue-master | Java / Spring Boot (80) | RabbitMQ (5672) |
| shipping | Java / Spring Boot (80) | RabbitMQ (5672) |
| user | Go (80) | MongoDB (27017) |
Furthermore, this project already provides all the Kubernetes YAML files needed to deploy the whole application with a single command, which is really helpful. These YAML files will be used as templates for our OAM Component definitions, since Verrazzano also supports the deployment of ordinary Kubernetes resources, as mentioned above.
Before starting this migration, it’s recommended that you run the application on a real Kubernetes cluster to guarantee that everything is working. The steps to evaluate Sock Shop on OKE are described in the Appendix of this blog post.
The migration process
Once we have Verrazzano installed in our OCI Kubernetes cluster (or even locally using Minikube), the first thing we need to define is a Kubernetes namespace with the labels verrazzano-managed=true and istio-injection=enabled. To keep this migration separate from the original application, let’s use a namespace named `sock-shop-v8`. The resulting YAML should look like this:
apiVersion: v1
kind: Namespace
metadata:
  name: sock-shop-v8
  labels:
    verrazzano-managed: "true"
    istio-injection: enabled
The next step is to convert every existing Kubernetes Deployment for the containers listed in the original YAML file into a Verrazzano OAM Component. As mentioned before, we will use a Kubernetes resource workload for each existing container. For example, the carts Deployment YAML should be converted using the pattern presented in the following picture:
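As a sketch of that pattern (field values here are based on the original complete-demo.yaml; the component name follows the naming convention described later in this post), the carts Deployment wrapped as an OAM Component might look like this:

```yaml
apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
  name: carts-deployment
  namespace: sock-shop-v8
spec:
  workload:
    # The original Kubernetes Deployment is embedded unchanged as the workload
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: carts
    spec:
      replicas: 1
      selector:
        matchLabels:
          name: carts
      template:
        metadata:
          labels:
            name: carts
        spec:
          containers:
            - name: carts
              image: weaveworksdemos/carts:0.4.8
              ports:
                - containerPort: 80
```

The key point is that the entire original Deployment descriptor moves under `spec.workload`, untouched.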
It’s important to note that the Kubernetes Service descriptor also needs to be converted following the same pattern, as we can see in the following picture:
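A sketch of the same pattern applied to the carts Service (values based on the original complete-demo.yaml) could look like this:

```yaml
apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
  name: carts-service
  namespace: sock-shop-v8
spec:
  workload:
    apiVersion: v1
    kind: Service
    metadata:
      # The embedded Service keeps its original name, so in-cluster
      # DNS lookups (e.g. http://carts) continue to work unchanged
      name: carts
    spec:
      ports:
        - port: 80
          targetPort: 80
      selector:
        name: carts
```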
The only thing that we didn’t move in the Service descriptor was the annotation prometheus.io/scrape: ‘true’. This is because we will use a MetricsTrait in the OAM ApplicationConfiguration.
Another thing worth mentioning is that the carts Deployment and the carts Service become two distinct OAM Components with two distinct Kubernetes workloads (Deployment and Service). When we reference these two OAM Components in the ApplicationConfiguration, they need to be referenced by their names. Because they are both OAM Components, we can’t give them the same name (carts). Note that the OAM Component for the Kubernetes Deployment workload is named carts-deployment, and the OAM Component for the Kubernetes Service workload is named carts-service.
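To illustrate how the two components are then referenced by name, a minimal ApplicationConfiguration (a sketch; the application name `sock-shop` is an assumption) would look like this:

```yaml
apiVersion: core.oam.dev/v1alpha2
kind: ApplicationConfiguration
metadata:
  name: sock-shop
  namespace: sock-shop-v8
spec:
  components:
    # Each OAM Component is referenced by its metadata.name,
    # which is why the Deployment and Service components need distinct names
    - componentName: carts-deployment
    - componentName: carts-service
```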
Define the Metrics Traits
The original Kubernetes Sock Shop file contains the annotation prometheus.io/scrape: ‘true’ in the following services:
- Carts
- Catalogue
- Orders
- Payment
- Shipping
- User
This annotation allows Prometheus to scrape metrics from these services. However, for Verrazzano, we can use a MetricsTrait attached to the OAM Components. Example:
- componentName: carts-deployment
traits:
- trait:
apiVersion: oam.verrazzano.io/v1alpha1
kind: MetricsTrait
spec:
port: 80
scraper: verrazzano-system/vmi-system-prometheus-0
Note that we need to specify port 80, which is the port where this application exposes the /metrics endpoint; otherwise, the MetricsTrait will use the default port 8080.
Adding the Ingress Trait
At the end of the migration, it’s necessary to declare an Ingress for the front-end-service component.
The oam-kubernetes-runtime is not installed with privileges that allow it to create the required Kubernetes Ingress resource. Before deploying the application, it’s necessary to create a role that allows Ingress resource creation and bind that role to the oam-kubernetes-runtime service account. For this demo to work, the following YAML content should also be deployed:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: oam-kubernetes-runtime-ingresses
rules:
  - apiGroups:
      - networking.k8s.io
      - extensions
    resources:
      - ingresses
    verbs:
      - create
      - delete
      - get
      - list
      - patch
      - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oam-kubernetes-runtime-ingresses
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: oam-kubernetes-runtime-ingresses
subjects:
  - kind: ServiceAccount
    name: oam-kubernetes-runtime
    namespace: verrazzano-system
The oam-kubernetes-runtime operator will process the ApplicationConfiguration and extract the Ingress into a separate resource during deployment. In the following sample, note that the Ingress is the Kubernetes Ingress, not the IngressTrait provided by Verrazzano.
When declaring it, make sure to point it to the Kubernetes Service called front-end, and the port 80 (the port exposed by this service). Example:
traits:
- trait:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: sock-shop-v8-ingress
annotations:
kubernetes.io/ingress.class: istio
spec:
rules:
- host: sockshop.A.B.C.D.nip.io
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: front-end
port:
number: 80
Make sure to replace the A.B.C.D in the host name with the Load Balancer IP shown with the command:
kubectl get ingress \
-n sock-shop-v8 sock-shop-v8-ingress \
-o jsonpath='{.status.loadBalancer.ingress[0].ip}'
This last step concludes the migration of Sock Shop to Verrazzano. You can see the resulting Verrazzano YAML files at this URL: https://github.com/rafabene/microservices-demo/blob/master/deploy/kubernetes/oci/sock-shop-v8.yaml
After deploying it, Sock Shop will be available at http://sockshop.A.B.C.D.nip.io/ and the metrics will be scraped to Prometheus.
Conclusion
The migration process is very straightforward and, as you could see, no source code or container modification is needed to run your application in Verrazzano. You can then start using features such as a multicluster environment, monitoring and logging, and the many other components deployed together with Verrazzano, like Istio and Kiali.
Check out more details about Verrazzano on its web pages: https://verrazzano.io and https://www.oracle.com/java/verrazzano/. On Verrazzano’s YouTube channel, you will find a Verrazzano tour video and much more. News about the project is shared on Verrazzano’s Twitter profile. Verrazzano is a fast-evolving open source project; you can follow it, and also contribute to it, on the GitHub project page.
Appendix: Pre-migration steps – Running Sock Shop on OKE
Running the Sock Shop demo on any Kubernetes cluster is really simple. You can set up a cluster locally using Minikube, or a Kubernetes cluster in OCI, and execute the instructions described on the project’s page: Deploying Sock Shop on any Kubernetes cluster.
You can easily create an OCI cluster with a few clicks using OKE. After spinning up the cluster and having access to it, the following command will install everything (Namespace, Deployments and Services) for you:
kubectl apply -f https://raw.githubusercontent.com/microservices-demo/microservices-demo/master/deploy/kubernetes/complete-demo.yaml
However, if you just follow the instructions from the project page, you will notice that some “database” containers crash. If you open the logs, you will see that they fail because their root filesystems are read-only.
Why use a read-only root filesystem in Kubernetes?
It is worth remembering that we should apply security at all layers. Using “readOnlyRootFilesystem: true” prevents an attacker from modifying the application to deface it, and blocks the installation of malicious code.
While it’s clear that we should not store data inside containers, some database containers use control files (a.k.a. lock files) to work. Having a read-only root filesystem in such database containers might therefore cause them to fail.
This is why we should remove “readOnlyRootFilesystem: true” from these containers in order to make them work properly.
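As a sketch of what that change looks like in the deployment file (the container shown here is just one of the affected database containers; image and names follow the original complete-demo.yaml), the setting to remove sits under the container’s securityContext:

```yaml
containers:
  - name: carts-db
    image: mongo
    securityContext:
      # Remove this line (or set it to false) for the database
      # containers that crash with a read-only root filesystem:
      readOnlyRootFilesystem: true
```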
Investigation of “readOnlyRootFilesystem” in Sock Shop
Before removing “readOnlyRootFilesystem” from these database containers, we noticed that this setting has been present since 2016 for all services except catalogue-db (MySQL) and queue-master (RabbitMQ). The commit message “Not working with catalogue-db mysql container and queue-master due to issues” makes that very clear. MongoDB images seemed to work fine at that point, but later updates to the image could have introduced the issue. The Kubernetes file doesn’t pin a specific MongoDB version with an image tag; in fact, any newly released MongoDB image will be used in this demo.
Moreover, the website includes instructions for running the demo using Minikube, and there it works without such failures. The reason is that Minikube doesn’t come with any Pod Security Admission enabled.
Exposing Sock Shop in OKE
The original Sock Shop frontend allows external access through a fixed NodePort on port 30001. For security reasons, it’s recommended that the Kubernetes nodes not be publicly available, so we can’t reach the application through the NodePort service type. Instead, an easy way to expose the Sock Shop frontend is to change the service type to LoadBalancer.
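As a sketch, the edited front-end Service in the deployment file would look like this (ports follow the original complete-demo.yaml, where the front-end container listens on 8079):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: front-end
  namespace: sock-shop
spec:
  type: LoadBalancer   # changed from NodePort; OKE provisions an OCI load balancer
  ports:
    - port: 80
      targetPort: 8079  # port the front-end container listens on
  selector:
    name: front-end
```

Note that the fixed `nodePort: 30001` entry is simply dropped, since it only applies to NodePort services.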
Once we change the front-end service from NodePort to LoadBalancer, OKE will automatically provision a load balancer, and we can get the public IP of the front-end service with the following command:
kubectl get svc -n sock-shop
Just open the frontend EXTERNAL-IP in the browser and you will see the Sock Shop frontend.
Appendix Conclusion
OKE is secure by default: it determines whether a pod should be admitted to the cluster based on the security context requested in the pod spec. This means that the carts-db, orders-db, user-db, and rabbitmq containers will crash because they request a read-only root filesystem.
The issue related to the read-only root filesystem has been reported to the original repository. The fixed Kubernetes deployment file, which also contains the LoadBalancer service for the frontend, can be found here: complete-demo.yaml. This is the file used to deploy and test Sock Shop before migrating it to Verrazzano.
