Kubernetes at scale just got easier with new Oracle Container Engine for Kubernetes enhancements

March 20, 2023 | 6 minute read
Mickey Boxell
Product Management

After Linux, Kubernetes is the second-fastest-growing project in the history of open source software. Less than a decade old, Kubernetes has gone mainstream, seeing unprecedented growth in adoption, particularly in the last couple of years.

Oracle Cloud Infrastructure (OCI) has seen this momentum as well. Thousands of our enterprise customers across all industries are experiencing the benefits of cloud native application development, leading them to hasten their adoption and scale their usage of Kubernetes. Increasingly, customers are standardizing on Kubernetes and expanding beyond only running containerized applications to using Kubernetes for extract, transform, load (ETL) jobs, pipelines, high-performance computing (HPC) workloads, and even databases, all running on Oracle Container Engine for Kubernetes (OKE).

"We run billions of voice AI queries on OCI, using a mix of Kubernetes infrastructure with OKE, GPUs, HPC, and modern developer services including streaming, OpenSearch, and more. We chose OCI as the best cloud to train and deliver our next generation AI application – helping us provide the fastest and most accurate voice experiences to global brands including Mercedes Benz, Toast, VIZIO, Hyundai, and others. With OCI, we have seen a 50-60 percent performance boost compared to our previous cloud, along with 2x cost reduction – all while doubling our usage."

- James Hom, Chief Product Officer at SoundHound. 

Kubernetes adoption continues to accelerate. Gartner estimates that, by 2024, more than 75% of enterprises will be running Kubernetes in production, a jump from 40% in 2021. Even more telling, according to Gartner, that number might exceed 90% by 2026.

Kubernetes environments are notoriously complex to manage at scale, with Day-2 operations being a key challenge for customers. As OCI customers continue to scale their Kubernetes footprint, we're excited to announce several new enhancements to OKE that dramatically simplify the operational experience of running enterprise Kubernetes at scale, including the following:

  • Serverless Kubernetes with virtual nodes, preannounced at our recent CloudWorld conference, enables customers to ensure reliable operations at scale without the complexities of managing, scaling, upgrading, and troubleshooting the underlying Kubernetes node infrastructure. Virtual nodes enable granular pod-level elasticity with usage-based pricing. This feature improves resource utilization by provisioning right-sized compute on demand as pods are scheduled, enabling customers to optimize the cost of running Kubernetes workloads at scale.

  • Add-on lifecycle management provides customers greater flexibility to install and configure their chosen operational software or related applications. Add-ons include essential Kubernetes software deployed on the cluster, such as CoreDNS and kube-proxy, and access to a growing portfolio of related applications, such as the Kubernetes Dashboard and Oracle Database Operator. The service manages the full lifecycle of the add-on software, from initial deployment and configuration through ongoing operations, including upgrades, patching, scaling, and rolling configuration changes.

  • Workload identity enhances your security posture with the ability to specify granular identity and access management controls at the pod level versus scoping access permissions at the node level.

  • More improvements include support for larger clusters with a default of 2,000 worker nodes, the introduction of an OKE SLA based on Kubernetes API server availability, and upcoming features, such as flexible workload support using self-managed nodes and preemptible instance support. These features enable advanced use cases that require more granular controls, large-scale workloads, or specialized operational software.
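
As a concrete illustration of the pod-level elasticity described above: with virtual nodes, capacity is provisioned and billed per pod based on its resource requests, so right-sizing those requests directly controls cost. The sketch below builds a minimal pod manifest with explicit CPU and memory requests; all names, images, and values are illustrative, not OKE-specific APIs.

```python
# Sketch: on virtual nodes, per-pod resource requests drive both
# scheduling and usage-based billing, so right-sizing them matters.
# All names and values below are illustrative.

def make_pod_spec(name: str, image: str, cpu: str, memory: str) -> dict:
    """Build a minimal pod manifest with explicit resource requests."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [
                {
                    "name": name,
                    "image": image,
                    # With per-pod billing, these requests determine
                    # both the compute provisioned and the cost.
                    "resources": {
                        "requests": {"cpu": cpu, "memory": memory},
                        "limits": {"cpu": cpu, "memory": memory},
                    },
                }
            ]
        },
    }

pod = make_pod_spec("etl-worker", "ghcr.io/example/etl:1.0", "500m", "1Gi")
print(pod["spec"]["containers"][0]["resources"]["requests"])
```

Such a manifest could be applied with `kubectl apply` or a Kubernetes client library; the point is that cost optimization moves from node-pool sizing to per-pod request tuning.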

Simpler, reliable operations at scale

The new features provide out-of-the-box capabilities that are crucial for large-scale, mission-critical Kubernetes environments. By offloading the complex Kubernetes management responsibilities to the Kubernetes service, enterprises benefit from a much simpler way to ensure reliable operations at scale, while lowering the skills barrier and reducing the management burden on IT and operations teams.

This reduction means lower risk and lower total cost of ownership (TCO), saving you time and effort and enabling you to improve agility, accelerate adoption, and scale Kubernetes across the organization.

“With Oracle Kubernetes Engine we're able to quickly expand agentless scanning of workloads on OCI, which allows us to focus on delivering value to our OCI customers and fuel our continued rapid growth, rather than spend resources on infrastructure management. With this focus, we reported $100M ARR in just 18 months, becoming one of the fastest growing software companies ever.”

- Oron Noah, Director of Product Management at Wiz.

A screenshot of the Create Cluster (Custom) page showing how to simplify worker node operations using virtual nodes.
Creating a cluster with virtual nodes

More control with less risk at scale

To accommodate your specific needs, you now have more fine-grained control over your chosen add-ons, node types, access permissions, and more. However, this flexibility doesn't come at the expense of the operational experience, nor does it increase risk as the complexity or scale of your Kubernetes deployments grows. Not only does the service manage the full lifecycle of these add-ons, it also ensures that the out-of-the-box, opinionated configurations of popular add-ons adhere to industry best practices, further minimizing errors and ensuring optimal runtime behavior.
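
As an example of pod-level access permissions, workload identity lets an IAM policy grant rights to a specific Kubernetes service account rather than to worker nodes. A hypothetical policy statement might look like the following; the compartment name, namespace, service account, and cluster OCID are placeholders, and the exact predicate names should be verified against the OKE workload identity documentation:

```
Allow any-user to read objects in compartment app-compartment where all {
    request.principal.type = 'workload',
    request.principal.namespace = 'production',
    request.principal.service_account = 'app-sa',
    request.principal.cluster_id = '<cluster OCID>'
}
```

Scoping the policy this way means a compromised pod in another namespace, or another service account on the same node, gains no access.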

A screenshot of the Create Cluster (Custom) page, showing how to configure cluster software using the add-on lifecycle management feature.
Configuring add-ons during cluster creation

Lower Kubernetes costs at scale

You can realize compounded cost savings from a reduced overall TCO of Kubernetes operations, improved scaling economics and resource utilization with virtual nodes, and OCI's superior cost-performance compared to other clouds. These savings add up when operating at large scale. Some customers save as much as 50% when migrating their Kubernetes applications to OCI.

For large-scale use cases, such as independent software vendors (ISVs) or software-as-a-service (SaaS) and multitenant applications, you can see further cost-of-goods-sold (COGS) and manageability improvements with a combination of the new features, such as workload identity and virtual nodes. These features enable enterprises to scale Kubernetes in an optimized, cost-performant way, particularly given the current economic climate, which makes reducing cloud spend a priority.

“Standardizing on Kubernetes and other IaC patterns enables us to have consistent operations across our digital portfolio. We reduced our cloud spend by migrating our video service from another cloud provider to OKE. Flex Shapes enabled us to accommodate the exact resource needs of our app, saving us a significant amount of money.”

- Sebastian Daehne, Director of DevOps Engineering at GoTo.

Let OKE help you focus on your business

These enhancements improve the default OKE experience, enabling customers to focus on their core business rather than on operational toil. The new features benefit new and existing users alike. Enterprise customers with legacy applications can accelerate their journey to cloud native and their modernization efforts. Developers building modern applications can improve productivity and deliver innovative apps faster rather than getting buried in infrastructure management. Other organizations can gain considerable savings when migrating Kubernetes to OCI with no change to application code or admin experience. Finally, existing OKE customers can dramatically simplify their operational experience, offloading more management responsibilities to OCI, supporting advanced use cases, and accelerating the rollout of new workloads.

Want to learn more?

For more information on the concepts in this blog, Oracle Cloud Infrastructure, and Oracle Container Engine for Kubernetes, see the following resources:

Mickey Boxell

Product Manager on the Oracle Containers and Kubernetes Services team.

