As Kubernetes adoption grows across major cloud providers, it's interesting to compare Kubernetes itself to the concept of an operating system. According to Wikipedia, an operating system is defined as “system software that manages computer hardware and software resources and provides common services for computer programs.” Abstractly, this isn't so different from the current model of Kubernetes running on top of a cloud provider, servicing applications that are built to run on top. If we start to think about Kubernetes in this context, what can we learn about where Kubernetes has been and where it is heading?
Those who operate data centers have long understood the value of standardizing on a smaller set of underlying components, including the operating system, to minimize operating costs and overhead. Customers and vendors alike have rallied around Kubernetes ahead of the alternatives, recognizing the value of an open (albeit somewhat complex) standard for container orchestration.
As enterprises have adopted container technology, they too have recognized the opportunity to build on this open Kubernetes platform, as a way to ease their transition from on-premises applications to the cloud, avoid lock-in across cloud providers, and provide the future fabric for hybrid deployments and even serverless applications.
We typically think about an operating system as part of a “sandwich”—the layer between the (hardware and software) resources below it, and the applications running on top. In the context of our Kubernetes analogy, a cloud provider (or on-premises data center) is underneath, and business applications are on top. In general, the job of the operating system layer here is to abstract away the complexity of interacting with the underlying resources, and make it easier for applications to be built and run.
Of course, not all providers are created equal here. Just as I can run Linux on a Raspberry Pi or on a high-end bare metal server, I can run Kubernetes on clouds with varying degrees of sophistication. The right cloud fabric, with high, predictable performance from the underlying compute, storage, and network, as well as security, governance, and control interfaces, is crucial to enabling enterprise-grade Kubernetes and the applications that use it.
Just as Linux has expanded well beyond the kernel, the “Cloud OS” of the future will go beyond base Kubernetes to include what are generally thought of today as “Kubernetes add-ons” but are really necessary enabling components of a cloud (or data center) OS. Relevant examples include service meshes (Istio and Linkerd), serverless functions (the Fn project), monitoring and logging add-ons, and Kubernetes “operators,” a pattern for packaging and managing (often stateful) applications on Kubernetes: for example, a WebLogic Operator and a MySQL Operator, and potentially even operators for Kubernetes itself.
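At the heart of the operator pattern is a reconcile loop: compare the desired state declared in a custom resource against the state actually observed in the cluster, and take whatever actions close the gap. As a rough illustration only, here is a minimal, library-free Python sketch of that idea; the function and action names are invented for this example and are not part of any real operator SDK (real operators use libraries such as client-go or the Operator Framework):

```python
# Illustrative sketch of an operator-style reconcile loop.
# Desired state comes from a resource "spec"; observed state is what
# currently exists; the reconciler returns the actions needed to converge.

def reconcile(desired_replicas, observed_pods):
    """Return the list of actions that converge observed state to desired."""
    actions = []
    diff = desired_replicas - len(observed_pods)
    if diff > 0:
        # Too few pods running: scale up by creating the missing ones.
        actions.extend(["create-pod"] * diff)
    elif diff < 0:
        # Too many pods running: scale down by deleting the surplus.
        actions.extend(["delete-pod"] * (-diff))
    return actions

# Example: the spec asks for 3 replicas but only 1 pod is observed.
print(reconcile(3, ["pod-a"]))  # ['create-pod', 'create-pod']
```

A real operator runs this loop continuously against the Kubernetes API, which is what lets it manage application-specific concerns (failover, upgrades, backups) that a generic controller cannot.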
Cloud providers will move to package all of these components into managed Cloud OSs that shield their users, developers and enterprises alike, from the complexities of managing their own container infrastructure, particularly in high-availability contexts, and that ensure the ongoing integrity of the OS and its compatibility with the service layers underneath.
This is what we are working towards at Oracle Cloud Infrastructure: an open, standards-based Cloud OS that is based on unmodified, upstream, open-source projects, managed on an enterprise-grade cloud infrastructure with superior performance, availability, and security. Our customers will be able to use it to run their business applications with confidence and with the freedom to move them between data centers and clouds.
Where are you on the journey towards a Cloud OS? We’d welcome the opportunity to talk to you about your current container strategy, see if we can help you, and get your feedback about our plans.
Sr. Director, Product Management