Reimagining Startup and Enterprise Innovation

How Kubernetes Supports Cloud Native Startup Technology

Vikas Raina
Principal Cloud Architect, Oracle for Startups

Imagine a nondescript town where only bus service is available for travel. Sometimes, the bus gets overcrowded. Other times, there is hardly anyone in it. Privacy is non-existent, and luggage is mixed up. On bad days, the bus breaks down and brings the whole journey to a grinding halt.

Then one day, the town gets a railway station. The train has multiple cars that allow the crowd to disperse. On slow days, cars can be removed and added again in the case of a spike in passengers. A group of passengers going to a common destination can be accommodated in the same car, and if a car breaks down, a new one can be attached in its place and passengers moved into it. When needed, cars can be attached to a completely different train, along with the passengers inside. The whole train is controlled by a central unit (the engine).

You get the drift.

The train works much like Kubernetes and containers do. The train cars are the pods, which exist as standalone entities and each carry a piece of an application. The passengers are the containers, which carry applications along with their libraries (exemplified by the luggage in our train scenario). Multiple containers make up an application, and these containers are controlled by a central HQ called Kubernetes.

Kubernetes is a container orchestration and management system, and belongs to the cloud-native technology stack. It rests on declarative constructs that describe how applications are composed, how they interact, and how they are managed. It provides flexibility, elasticity, and easier isolation of application chunks, and startups have access to Kubernetes on Oracle Cloud Infrastructure as part of Oracle for Startups.

Under the hood with Kubernetes

Kubernetes, at its heart, is an open-source orchestration platform that ensures that containers are continuously running, healthy, and available. If a container dies, another container is created in its place. It provides the application portability layer. 

Kubernetes architecture is composed of:

  • Master. Entry point for administrative tasks.
  • Cluster. Collection of servers that perform various tasks.
  • Node. Where the pod runs; the node pulls an image from the container image registry to launch a container. More often than not, this is a VM managed by Kubernetes. Nodes are further divided into master nodes (which control the worker nodes) and worker nodes (where the application runs).
  • Pod. Smallest building block in the Kubernetes universe; pods are deployed on and run on nodes. A pod is a collection of containers that need to be clubbed together and coexist.
  • Container. Containers are where the application is deployed, abstracted from the rest of the environment. They are lightweight but have their own filesystem, CPU, and memory.

Kubernetes ensures provisioning, scalability, and high availability for the application, and automates the complete lifecycle of containerized applications. It sits on top of VMs and creates a cluster of servers – virtual or physical. Kubernetes pulls an image from a container image registry, such as Docker Hub, to spin up a container; the master orchestrates and coordinates the cluster, and the nodes are where the applications run.

The greatest advantage of Kubernetes is portability, which means a container can be deployed on any cloud. Portability prevents vendor lock-in and helps retire the age-old complaint of ‘it-works-on-my-machine.’ Development teams can also pick whatever tools and libraries they deem necessary to build a microservice, which runs in a container. That freedom improves productivity, which is important to scaling startups.

Attributes of a pod

  • Pods – train cars in our analogy above – are where the container is deployed. With its own unique IP address, a pod comprises one or more containers. 
  • A pod is a single unit managed by Kubernetes. 
  • A pod is defined by a YAML manifest and hosts the application instance. 
  • A pod typically represents a group of application containers that share networking, storage, and compute resources. The containers in a pod share the IP address of the pod. 
  • Throughout its lifetime, each pod is tied to a node, and contains information about each container image version and how to run it. 
  • Pods are ephemeral - they get terminated and recreated all the time.
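
The attributes above can be seen in a minimal pod manifest. A sketch only – the pod name, labels, and sidecar container are hypothetical – showing two containers that share the pod's IP address:

```yaml
# Hypothetical example: a pod whose two containers share the
# pod's network namespace (and therefore its IP address).
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar   # hypothetical pod name
  labels:
    app: web
spec:
  containers:
    - name: web
      image: httpd:latest
      ports:
        - containerPort: 80
    - name: log-sidecar     # hypothetical helper container
      image: busybox:latest
      command: ["sh", "-c", "tail -f /dev/null"]
```

Because both containers live in the same pod, the sidecar can reach the web server on localhost:80 – no service discovery needed inside the pod.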


Attributes of a container

  • Containerized applications are driving the next big wave in the technology ocean, replacing monolithic application architecture as the de facto choice.
  • Containers are the base unit in the Kubernetes universe and run a complete application or a piece of one. The primary purpose of Kubernetes is to keep containers healthy and running by monitoring them all the time, just as the purpose of the train is to get passengers from place to place safely and efficiently. Containers are conceptually similar to VMs, but they virtualize the operating system rather than the hardware. 
  • A single VM can host multiple containers, just as a single bare-metal machine can host multiple VMs. Containers work in complete isolation, as if the other containers did not exist, and typically perform a single task. This results in faster execution, lower costs, and shared storage. All a container requires to exist is some CPU and memory.
  • The application code and the libraries get packaged in a Docker image, which is pulled to create containers that ensure the same behavior across all platforms or clouds.
  • Large applications can be broken down into logical standalone pieces and run in containers. Doing so has given rise to microservices architecture.
  • With containers, developers can package the application – along with its runtime dependencies, like libraries and versions – into an image that can be deployed anywhere. 
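
That packaging step is usually described in a Dockerfile. A minimal sketch, assuming a static site served by the stock httpd image (the source path is hypothetical):

```dockerfile
# Hypothetical example: bake application files into an httpd image.
FROM httpd:latest
# Copy the application code into the web server's document root.
COPY ./public-html/ /usr/local/apache2/htdocs/
EXPOSE 80
```

Building this produces an image that behaves identically on a laptop, on-premises, or in any cloud – which is the portability promise described above.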

A typical YAML deployment file would look something like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: httpd
  template:
    metadata:
      labels:
        app: httpd
    spec:
      containers:
        - image: "phx.ocir.io/gse00014407/firstrepo/httpd:latest"
          imagePullPolicy: Always
          name: httpd
          ports:
            - containerPort: 80
              name: httpd
              protocol: TCP
      imagePullSecrets:
        - name: ocirsecret
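
The Deployment creates the pods, but to make them reachable you would typically pair it with a Service. A minimal sketch – the Service name is hypothetical, and the selector assumes the app: httpd label used in the Deployment:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: httpd-service   # hypothetical name
spec:
  type: LoadBalancer    # provisions a cloud load balancer on OCI
  selector:
    app: httpd          # routes traffic to pods labeled app: httpd
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
```

Because the Service selects pods by label rather than by name, it keeps working as Kubernetes terminates and recreates the ephemeral pods behind it.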

Oracle Cloud Infrastructure provides Container Engine for Kubernetes, a fully managed, scalable, and highly available cloud service. You can also choose to build your own Kubernetes setup on Oracle Cloud Infrastructure. Users can access Container Engine for Kubernetes to define and create Kubernetes clusters using the Console and the REST API. You can then access the clusters you create using the Kubernetes command line (kubectl), the Kubernetes Dashboard, and the Kubernetes API.


Join the discussion

Comments (1)
  • Rengarajan Bashyam Wednesday, September 16, 2020
    Succinct blog. Thanks for this. We have been working with Kubernetes for over two years now, but I always look for refreshing perspectives on this exciting technology, like this blog. I would say that, for us, life without Kubernetes is unimaginable now. It has been a tectonic shift in the way we develop, test, distribute, and deploy our code. It makes Edge vs. Cloud a non-issue through the deploy-anywhere paradigm.