

Running the Linkerd Service Mesh on Container Engine for Kubernetes

Gilson Melo
Director of Product Management

As part of our continuing commitment to open standards and supporting a broad and varied ecosystem, we’re pleased to announce that Buoyant has extended its Linkerd service mesh support to Oracle Cloud Infrastructure Container Engine for Kubernetes (sometimes referred to as OKE).

This post was written by a guest contributor, Charles Pretzer, Field Engineer at Buoyant and a Linkerd Community Member.

Over the course of my career, I’ve had the opportunity to work with many Oracle products. So, when I heard that Oracle had begun offering its Container Engine for Kubernetes, I had to try it right away. The Linkerd service mesh is designed to run on all flavors of Kubernetes, so getting Linkerd up and running on Container Engine for Kubernetes seemed like the right way to test it out.

This post is a step-by-step guide to installing Linkerd on Container Engine for Kubernetes. It also deploys a sample application so that you can explore some of the features that Linkerd offers and test the service mesh functionality.

The Value of Running Linkerd on Container Engine for Kubernetes

Before jumping into the processes of provisioning a cluster and installing Linkerd, let’s take a moment to explore how Linkerd’s functionality benefits applications running on Container Engine for Kubernetes. Distributed applications are complex, especially when compared to their monolithic counterparts. The concept of a service mesh was created to provide meaningful insights into distributed applications by offering features like observability, security, reliability, and traffic management.

Observability

Observability is a key capability of Linkerd: it provides high-resolution telemetry about the latencies, success rates, and overall network performance of each of the services in your application. The Linkerd proxy is written from the ground up in Rust to be high-performing and memory-safe, and it’s deployed using the sidecar pattern. This pattern puts Linkerd in the data path, allowing it to intercept traffic and collect valuable telemetry about the requests made between services. This telemetry can ultimately be used to reduce Mean Time To Detect (MTTD) and Mean Time To Repair (MTTR) for errors and issues in the distributed application.

Security

Another benefit of the sidecar pattern is that Linkerd can encrypt and decrypt the traffic between the services, which results in mutual TLS communication between them. This is ideal in a multi-tenant or zero-trust environment because anyone sniffing traffic gets only encrypted data.

Reliability

Reliability is another important concept of a service mesh. Linkerd load-balances requests by using an exponentially weighted moving average (EWMA) algorithm to ensure that the next request goes to the instance of a service with the lowest latency. This prevents any single instance of a service from being overloaded with requests while other instances remain idle. Further, the proxy can be configured with retries and timeouts to tune traffic for specific services.
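As a sketch of how this looks in practice, retries and timeouts are configured per route through a Linkerd ServiceProfile resource. The service name, namespace, and route below are hypothetical; the field names follow the Linkerd ServiceProfile spec:

```yaml
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  # The name must be the fully qualified DNS name of the service
  # being profiled (a hypothetical service is shown here).
  name: my-svc.my-namespace.svc.cluster.local
  namespace: my-namespace
spec:
  routes:
  - name: GET /api/items          # hypothetical route
    condition:
      method: GET
      pathRegex: /api/items
    isRetryable: true             # allow Linkerd to retry failed requests on this route
    timeout: 300ms                # fail requests on this route that exceed 300 ms
  retryBudget:
    retryRatio: 0.2               # retries may add at most 20% extra load
    minRetriesPerSecond: 10
    ttl: 10s
```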

Traffic Management

Speaking of traffic, the last core concept of Linkerd is managing, or shaping, traffic. In many conversations I have with folks, a common goal is automating the deployment of new versions of code through their CI/CD system. The service mesh can use the success-rate metrics that it collects to determine the health of a newly deployed version of a service, and this analysis provides the foundation for canary deployments in an environment following DevOps practices. Linkerd played a role in the development of the Service Mesh Interface (SMI) specification, which defines a standard for common service mesh features, including the Traffic Split spec. Following this specification, Linkerd emits metrics that automated systems can use to slowly shift traffic between versions of services.
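To make the Traffic Split spec concrete, here is a minimal SMI TrafficSplit resource for canarying a hypothetical web service. The service names and weights are illustrative; an automated system would adjust the weights over time based on the canary's success rate:

```yaml
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: web-split
  namespace: my-namespace    # hypothetical namespace
spec:
  service: web-svc           # the apex service that clients address
  backends:
  - service: web-svc-v1      # current version: 90% of traffic (weights are relative)
    weight: 900m
  - service: web-svc-v2      # canary version: 10% of traffic
    weight: 100m
```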

Installing and Using Linkerd

To be clear, it’s possible to run an application without a service mesh like Linkerd. But doing so means one of two things: the concepts previously described are excluded from the infrastructure, or the business logic must include code to monitor, secure, and manage the traffic. This work can be a large burden on the developers, so let's look at how to drop Linkerd into an application to get these awesome features.

Step 1: Install kubectl

In this walkthrough, we use kubectl version 1.15 or later. If you don’t already have kubectl installed, install it by following the Kubernetes documentation.

Step 2: Create a Container Engine for Kubernetes Cluster

The detailed Oracle documentation walks you through the steps for using the Oracle Cloud Infrastructure Console to create a cluster.

Step 3: Download the kubeconfig File

Download the kubeconfig file to your local machine so that you can access the cluster.

Step 4: Verify the Connection

Verify the connection by using the kubectl version command. You should see output similar to the following (your versions might differ):

Client Version: v1.18.0
Server Version: v1.15.7

Step 5: Install Linkerd

Installing Linkerd is easy, thanks to the detailed installation instructions. Linkerd has a command line interface (CLI), which is designed to work like kubectl, so it should be familiar.

  1. Install the CLI, following the instructions to add the executable to your PATH environment variable:

    $ curl -sL https://run.linkerd.io/install | sh

  2. Check the Linkerd version:

    $ linkerd version

  3. Ensure that Linkerd has the permissions it needs to run on the cluster:

    $ linkerd check --pre

  4. Install the Linkerd control plane:

    $ linkerd install | kubectl apply -f -

  5. To ensure that the control plane is running, run the linkerd check command without the --pre flag:

    $ linkerd check

  6. Open the Linkerd dashboard to see the namespaces and deployments:

    $ linkerd dashboard &

Step 6: Deploy a Sample Application

Deploy the sample application, emojivoto. Its manifest also creates the emojivoto namespace:

$ curl -sL https://run.linkerd.io/emojivoto.yml | kubectl apply -f -

Ensure that the emojivoto application is running by port-forwarding the web service and viewing the UI at http://localhost:8080:

$ kubectl -n emojivoto port-forward svc/web-svc 8080:80

Step 7: Add an Annotation to the Namespace

Linkerd has an auto-injection capability, which you can apply by adding the linkerd.io/inject: enabled annotation to the pod spec of a specific workload or at the namespace level. For this walkthrough, inject all the deployments in the emojivoto namespace by adding the annotation to the namespace.

$ kubectl annotate ns emojivoto linkerd.io/inject=enabled
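Alternatively, to mesh a single workload rather than a whole namespace, the same annotation can be placed in the pod template of a deployment. A hypothetical deployment is sketched here; note that the annotation goes on the pod template, not the deployment itself:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                       # hypothetical workload
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        linkerd.io/inject: enabled   # the proxy is injected when pods are (re)created
    spec:
      containers:
      - name: my-app
        image: my-app:latest         # hypothetical image
```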

With the dashboard open, restart the deployments to inject the proxy into the pods:

$ kubectl rollout restart deploy -n emojivoto

The dashboard automatically updates as each pod is restarted with the Linkerd proxy and is added to the mesh.

Step 8: View Performance Metrics

So far, we’ve used the Linkerd dashboard to see which deployments are and aren’t part of the mesh. You can drill down into the emojivoto namespace and the deployments there to see real-time performance metrics for the services.

The Linkerd CLI offers commands that output the same information. For example, the dashboard page for the emoji deployment in the emojivoto namespace shows the success rates and latencies for the pods in that deployment. You can get the same information with the CLI by using the stat command:

$ linkerd stat -n emojivoto deploy emoji

The emojivoto namespace page of the dashboard shows a simple service graph of the services that are communicating with each other. You can get the same information in tabular format from the edges command:

$ linkerd edges -n emojivoto deploy

Summary

As you can see, the Linkerd team has focused on making the user experience simple but powerful—just like Container Engine for Kubernetes. Hopefully, you've followed this guide and found out just how easy it is to provision a Container Engine for Kubernetes cluster and then get Linkerd up and running on it with a sample application.

I encourage you to deploy your application workloads to Container Engine for Kubernetes and then add them to the Linkerd service mesh to see real workloads running and emitting actionable telemetry. If you don’t already have an Oracle Cloud account, sign up for a free trial today.
