Karpenter Provider for OCI on OKE

Today we are announcing the general availability of the Karpenter Provider for OCI, along with its open source release on GitHub. If you run workloads in OCI Kubernetes Engine (OKE), this release gives you autoscaling that is faster, more flexible, and easier to manage, without having to define fixed-shape managed node pools up front.

The problem

Kubernetes' traditional node-level autoscaling model, Cluster Autoscaler, is built around managed node pools that each use a single compute shape. That works, but it creates real limits for customers running production workloads.

If a team needs different shapes for different workloads, they must create and manage many node pools. If one preferred shape is out of capacity, scaling can get stuck even if another valid shape is available. To avoid that, teams pre-create extra pools and keep more capacity around than they really need. Over time, that adds cost and operational work.

Customers want autoscaling that responds to application demand without forcing platform teams to pre-plan every node pool, shape, and fallback path.

The solution

The Karpenter Provider for OCI brings Karpenter’s flexible autoscaling model to OKE.

Instead of scaling a predefined node pool, Karpenter watches for unscheduled pods and provisions the right compute shape in real time, based on workload requirements and the set of OCI capacity options the administrator allows. Administrators define a broader capacity policy once, and Karpenter handles node selection and provisioning as demand changes.

Why this matters for OCI customers

This release makes Kubernetes autoscaling on OCI simpler, more flexible, and more efficient.

Instead of managing a growing set of static managed node pools to cover different shapes, architectures, or purchase models, teams can define a broader policy and let Karpenter choose from the allowed options at runtime. That reduces planning overhead for platform teams and makes it easier to support changing workload needs.

It also improves resiliency during scaling. If one shape is not available, Karpenter can move to the next best fit from the allowed options instead of stopping at the boundary of a single managed node pool, as Cluster Autoscaler does.

With the Karpenter Provider for OCI, customers can now:

  • Provision nodes just in time based on real workload demand
  • Reduce the number of static managed node pools they need to manage
  • Use a wider mix of OCI compute options in one policy
  • Improve cost efficiency by consolidating empty or underused nodes
  • Scale across different shapes and architectures more easily
  • Improve node lifecycle management through continuous reconciliation and automated replacement when node configuration drifts

For platform teams, that means less time spent planning node pools up front and less time managing node sprawl later. It also means a clearer path to keeping clusters efficient and aligned with desired configuration over time.

For application teams, it means they can keep using normal Kubernetes requests, selectors, taints, and affinities without waiting on the platform team to create a new pool every time a workload changes.
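As an illustration, a workload that needs Arm capacity expresses that with nothing but standard Kubernetes fields. The image name below is a placeholder; the manifest itself contains no Karpenter-specific configuration:

```yaml
# A standard Deployment: nothing here is Karpenter-specific.
# The resource requests and nodeSelector alone tell Karpenter
# what kind of node to provision if none is currently available.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api-server
  template:
    metadata:
      labels:
        app: api-server
    spec:
      nodeSelector:
        kubernetes.io/arch: arm64        # request Arm capacity
      containers:
        - name: api
          image: example.com/api:latest  # placeholder image
          resources:
            requests:
              cpu: "2"
              memory: 4Gi
```

If no existing node satisfies the arch and resource constraints, Karpenter provisions one that does, provided an administrator's NodePool policy allows it.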

How it works

The Karpenter Provider for OCI uses two main concepts: NodePool and OciNodeClass.

A Karpenter NodePool defines the flexible policy. This is where an administrator specifies what kinds of compute are allowed, such as instance families, architectures, availability domains, and purchase options like on-demand or preemptible.

An OciNodeClass defines the OCI-specific configuration. This is where administrators set details like compartment, subnet, network settings, security groups, and image configuration.

That separation is important: the NodePool defines the "what," and the OciNodeClass defines the "how." Once those are in place, Karpenter watches for unscheduled pods and provisions nodes automatically. Developers do not need to know anything about the underlying node setup. They just deploy their workloads using standard Kubernetes specs, and Karpenter handles the provisioning behind the scenes.
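A minimal pairing might look like the following sketch. The NodePool structure follows Karpenter's standard API; the OciNodeClass fields and API group shown here are illustrative assumptions, so check the provider's documentation for the exact schema:

```yaml
# Flexible capacity policy: Karpenter may choose any shape that
# satisfies these requirements when it provisions a node.
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: general-purpose
spec:
  template:
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64", "arm64"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
      nodeClassRef:
        kind: OciNodeClass   # OCI-specific configuration, defined below
        name: default
  limits:
    cpu: "1000"              # cap total provisioned capacity
---
# OCI-specific configuration. Field names and API group below are
# illustrative placeholders, not the provider's confirmed schema.
apiVersion: karpenter.oci.example/v1alpha1   # assumed API group
kind: OciNodeClass
metadata:
  name: default
spec:
  compartmentId: ocid1.compartment.oc1..example
  subnetId: ocid1.subnet.oc1..example
```

The NodePool stays stable as workloads change; only the requirements list needs to grow when the platform team wants to allow new shapes or architectures.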

Built for OCI

The Karpenter Provider for OCI is designed to work with OCI-specific infrastructure and OKE-specific workflows. At a high level, this release supports a range of OCI-native capabilities including:

  • Integration with OKE
  • Support for OCI flexible compute shapes
  • Support for preemptible capacity
  • Support for OKE images
  • OCI IAM integration through workload-aware access patterns
  • Networking support for OCI environments, including OCI VCN CNI and secondary VNIC configurations
  • Support for capacity reservations
  • Support for cluster placement groups
  • Support for compute clusters
  • Customization options for node behavior, including bootstrap and kubelet-level configuration

Easier adoption

Karpenter Provider for OCI can coexist with existing Cluster Autoscaler managed capacity during migration, but customers should avoid letting both autoscalers respond to the same unscheduled pods. The safest migration pattern is to separate ownership clearly: keep a small baseline of existing managed node pool capacity for critical system workloads, and introduce Karpenter first for a defined set of workloads using labels, taints, and node affinity. Once those workloads are running as expected on Karpenter provisioned nodes, customers can scale down or disable Cluster Autoscaler for the migrated capacity. This reduces the chance of both autoscalers racing to provision nodes for the same demand signal.
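One common way to enforce that ownership boundary is a taint on Karpenter-provisioned nodes, so only workloads that explicitly opt in land there. The sketch below uses Karpenter's standard taint support; the taint key, image, and OciNodeClass name are illustrative:

```yaml
# Taint Karpenter-provisioned nodes so only opted-in workloads
# schedule onto them during migration.
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: migration
spec:
  template:
    spec:
      taints:
        - key: example.com/migrated   # illustrative taint key
          effect: NoSchedule
      nodeClassRef:
        kind: OciNodeClass
        name: default                 # assumed OciNodeClass name
---
# Only workloads that tolerate the taint land on Karpenter nodes;
# everything else stays on the existing managed node pools.
apiVersion: v1
kind: Pod
metadata:
  name: migrated-workload
spec:
  tolerations:
    - key: example.com/migrated
      operator: Exists
      effect: NoSchedule
  containers:
    - name: app
      image: example.com/app:latest   # placeholder image
```

As confidence grows, the platform team widens the set of tolerating workloads and retires the Cluster Autoscaler-managed pools they no longer need.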

Customer perspective

We gave early access to the Karpenter Provider for OCI to one of our customers, GoTo. Here's what they had to say:

“Karpenter helps us simplify autoscaling on OCI by replacing a lot of manual node pool planning with a more dynamic provisioning model (multi-shape, multi-AD). For our use case, that means better and faster fallback behavior, clearer ownership boundaries between platform and workloads, and a more scalable path for onboarding new capacity.”

Getting started

Getting started with the Karpenter Provider for OCI begins with the official documentation, which walks through prerequisites, installation, configuration, and usage.

In general, the setup flow looks like this:

  1. Start with an existing or new OKE cluster.
  2. Review the prerequisites and installation guidance for Karpenter Provider for OCI.
  3. Configure the required IAM policies and access model so the provider can manage OCI resources appropriately.
  4. Install the Karpenter provider from the GitHub repository.
  5. Define the Kubernetes and OCI configuration needed for provisioning, including resources such as NodePool and OciNodeClass.
  6. Begin launching workload-aware capacity on OCI based on the needs of your cluster.

Conclusion

The Karpenter Provider for OCI gives OKE customers a more flexible way to scale by matching capacity to workload needs in real time while reducing node pool planning and management overhead.

To learn more, start with the official documentation and visit the GitHub repository. Begin with a small set of workloads, evaluate how Karpenter behaves in your environment, and expand from there.

Resources