October 18, 2018

Going Onsite with Cloud Native Labs

Mickey Boxell
Product Management

Summary

The cloud native landscape is filled with countless interwoven projects and constantly evolving best practices making it complex to navigate. The Cloud Native Labs team brings experience, informed opinions, and best practices regarding which technologies to use as you begin to explore Kubernetes and Cloud Native development.

Last week, the Cloud Native Labs team and the Oracle A-Team had the opportunity to run onsite workshops with customers in Denmark and the Czech Republic. The purpose was to share best practices and experiences running applications in a production-ready containerized environment. Using Oracle Container Engine for Kubernetes as our foundation, we discussed how to configure continuous integration and continuous deployment pipelines, logging and monitoring, and service mesh. In each two-day session, we were joined by 15-20 developers from European enterprises who brought various levels of experience with Kubernetes and other cloud native technologies.

Content of the Labs  

Our presentations and labs covered the lifecycle of a company's use of Kubernetes. We began by creating a multi-node cluster with Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE), which runs on top of Oracle Cloud Infrastructure, an enterprise-grade IaaS. This included configuring virtual cloud networks, setting up our node pool, and using the OCI CLI to download our kubeconfig file. After connecting to our newly created cluster, we interacted with it using kubectl. Next, we shared how to connect the Oracle Cloud Infrastructure Registry (OCIR) to our cluster and configure Kubernetes to deploy a sample application. To operationalize the application deployment process, we set up Wercker, a cloud-based CI/CD solution connected to GitHub. This provides a reproducible, repeatable process that pulls from a common repository accessible to your development team. When we updated our code in GitHub, Wercker ran an automation pipeline that built a new container image, pushed it to OCIR, a Docker API compatible container registry, and deployed it to our OKE instance, which updated the application by replacing the existing pods.
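The CI/CD flow just described can be sketched as a wercker.yml file. This is a minimal illustration, not our actual workshop configuration: the box image, pipeline name, and the OCIR repository path are all placeholder assumptions.

```yaml
# Illustrative wercker.yml sketch (box image, pipeline name, and
# repository path are hypothetical, not the workshop's real config).
box: node:10
build:
  steps:
    - script:
        name: run unit tests
        code: npm install && npm test
push-to-ocir:
  steps:
    - internal/docker-push:
        # Push the built image to OCIR, a Docker API compatible registry.
        username: $DOCKER_USERNAME
        password: $DOCKER_PASSWORD
        repository: iad.ocir.io/mytenancy/sample-app
        tag: $WERCKER_GIT_COMMIT
```

The credentials come from environment variables configured in Wercker rather than being committed to the repository, and tagging images with the Git commit hash keeps each deployment traceable back to its source.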

After creating a cluster and configuring our CI/CD pipeline to deploy the application, we switched focus to productionizing our application. We selected a handful of open source projects from the Cloud Native Computing Foundation (CNCF) to share. Before diving into those projects, we discussed Helm, a commonly used package management tool for Kubernetes. We walked through how Helm packages Kubernetes resources into charts, which make it possible to install, configure, and version an application repeatably. Helm is also useful because it offers a repository of charts for many CNCF projects, including the three solutions we chose to focus on: the EFK stack, Prometheus and Grafana, and Istio.
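As a small illustration of what a chart contains, here is a minimal Helm 2-era Chart.yaml; the name, version, and description are hypothetical, and the chart's templates/ directory would hold the actual Kubernetes manifests being packaged.

```yaml
# Illustrative Chart.yaml at the root of a Helm chart
# (name, version, and description are placeholders).
apiVersion: v1
name: sample-app
version: 0.1.0
description: Packages the sample application's Kubernetes resources
# Alongside this file, the chart would contain:
#   values.yaml   - default configuration values
#   templates/    - parameterized Kubernetes manifests
```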

We used Helm to install the EFK stack, which consists of Elasticsearch, Fluentd, and Kibana. This stack is a useful solution for capturing application and system logs in Kubernetes, which can help with diagnosing and addressing problems impacting cluster health or your application. The application we deployed earlier in the workshop included middleware that logged the time, route, and user-agent of every HTTP request. We used ApacheBench to ping the application in order to produce logs. As applications started on the nodes, the Fluentd DaemonSet tailed the logs of the containers underlying the application. Those logs were forwarded to Elasticsearch and enriched with additional tags to improve searching. We then graphed the data in Kibana.
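The node-level log collection described above hinges on running Fluentd as a DaemonSet with the host's log directory mounted into the pod. A trimmed sketch, with illustrative names and an image tag that may differ from what the chart actually installs:

```yaml
# Illustrative DaemonSet fragment: one Fluentd pod per node, tailing
# the container logs the node writes under /var/log/containers.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:elasticsearch
          volumeMounts:
            - name: varlog
              mountPath: /var/log
      volumes:
        - name: varlog
          hostPath:
            path: /var/log   # container logs live under /var/log/containers
```

Because a DaemonSet schedules one pod per node, every node's container logs are tailed without any per-application configuration.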

In addition to showing how to capture and visualize log data, we wanted to demonstrate how to monitor a cloud native environment. For this we chose Prometheus and Grafana, an industry-standard pair of tools for application telemetry and observability. Prometheus, a time-series database, includes tooling to scrape metrics from applications; Grafana models that data into useful dashboards. While our application had not been running long enough for the tools to gather particularly meaningful data, we were able to share a proof of concept showing how to instrument an application with a custom metric and display that metric on a Grafana dashboard.
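To make the custom-metric idea concrete, here is a small stdlib-only Python sketch of the text exposition format a Prometheus scrape target returns. The metric name app_requests_total and the port are assumptions, and a real application would normally use an official client library such as prometheus_client rather than hand-rolling the format:

```python
# Sketch: a hypothetical custom counter exposed in the Prometheus
# text exposition format (metric name and port are illustrative).
from http.server import BaseHTTPRequestHandler, HTTPServer

REQUEST_COUNT = 0  # incremented on each scrape/request in this toy example


def render_metrics() -> str:
    """Render the counter in Prometheus text exposition format."""
    return (
        "# HELP app_requests_total Total HTTP requests served.\n"
        "# TYPE app_requests_total counter\n"
        f"app_requests_total {REQUEST_COUNT}\n"
    )


class MetricsHandler(BaseHTTPRequestHandler):
    """Serves /metrics so a Prometheus scrape job can collect the counter."""

    def do_GET(self):
        global REQUEST_COUNT
        REQUEST_COUNT += 1
        body = render_metrics().encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.end_headers()
        self.wfile.write(body)


# To serve this for real, one would run:
#   HTTPServer(("", 8000), MetricsHandler).serve_forever()
# and point a Prometheus scrape job at :8000/metrics.
```

Once Prometheus is scraping the endpoint, a Grafana panel querying app_requests_total (or its rate) turns the raw counter into a dashboard.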

The final solution we showed during the workshop was the service mesh tool Istio. Istio is a comprehensive tool used to connect, manage, and secure microservices, such as the sample application we deployed in Kubernetes. We chose to focus on a handful of Istio's capabilities: A/B testing, traffic mirroring (also called shadowing), fault injection, tracing, and service mesh monitoring. We started by deploying multiple versions of the same application. We then deployed routing rules to split traffic between those versions by percentage, such as 10/90 or 50/50, which is great for A/B testing. The next pattern was mirroring: we had two versions of a service deployed, production and test, with the production version serving live traffic while a copy of each request was mirrored to the test version and its responses discarded. This let us exercise the new version with real traffic without impacting the currently deployed production version, enabling testing prior to release. We also demonstrated fault injection for negative testing and error handling. Finally, we demonstrated how to use Istio for monitoring and tracing: Istio leverages Prometheus for general monitoring, Jaeger for tracing calls through the mesh, and the Envoy sidecar for collecting service mesh telemetry. These tools are useful for identifying potential bottlenecks.
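Percentage-based routing of the kind described above can be expressed with an Istio VirtualService. A sketch assuming a service named sample-app with v1 and v2 subsets; a companion DestinationRule defining those subsets would also be needed:

```yaml
# Illustrative VirtualService splitting traffic 90/10 between two
# versions of a service (host and subset names are placeholders).
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sample-app
spec:
  hosts:
    - sample-app
  http:
    - route:
        - destination:
            host: sample-app
            subset: v1
          weight: 90
        - destination:
            host: sample-app
            subset: v2
          weight: 10
```

Shifting the weights (50/50, then 0/100) rolls traffic over gradually, and because the rule lives in the mesh rather than the application, no redeployment is needed to change the split.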

Developer Response

During the workshop, we received great questions about access management and applications of role-based access control (RBAC). People were interested in managing access to different levels of the cluster, a concern that typically arises once users are further along their Kubernetes journey. The developers also wanted more information about key management and secrets, and asked our opinions about the types of container images used to run their applications.

The customers were very receptive to Oracle’s managed Kubernetes offering. The scope of the workshop went beyond the creation of a Kubernetes cluster and the deployment of a hello world application. The overarching theme was to deploy an application in a cloud native way: we walked through configuring a smart CI/CD process and how to make sure everything is resilient and properly instrumented, complete with logs, tracing, and telemetry data. We are looking forward to future customer workshops and our next chance to try Czech Pilsner.
