OKE now supports Kubernetes 1.35, also known as “Timbernetes” or the World Tree release. It delivers smoother day‑2 operations, stronger security foundations, and smarter scheduling and networking behavior—along with a few changes you’ll want to plan for before you hit Upgrade.
Kubernetes 1.35 continues the Kubernetes community’s push toward more secure, modern defaults and streamlined operations. For OKE customers, preparation should focus on three areas: moving node pools to OL8 or later, because Kubernetes 1.35 requires cgroup v2 and OL7 is not supported; confirming your networking plans for the retirement of NGINX Ingress controller and the deprecation of kube-proxy IPVS; and maintaining an upgrade plan that keeps clusters within the supported version window so shorter lifecycle timelines do not create avoidable risk. In the sections below, we break down each of these changes and what they mean for a smooth upgrade.
What’s new? (Some Highlights)
1) Resize Pods without restarts
Kubernetes 1.35 introduces in-place Pod resource updates, allowing you to adjust CPU and memory requests/limits without restarting Pods in many scenarios. This can reduce disruption for stateful and restart-sensitive workloads. It also enables faster tuning during performance investigations and ongoing right-sizing. This is most valuable for stateful workloads and latency-sensitive services where restarts are expensive.
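As a sketch of what this looks like in practice, a container can declare a resizePolicy stating which resources may change in place; the Pod, container, and image names below are hypothetical:

```yaml
# Hypothetical Pod illustrating in-place resize: resizePolicy tells the
# kubelet which resource changes can be applied without restarting the
# container.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: app
    image: nginx:1.27
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired      # CPU changes apply in place
    - resourceName: memory
      restartPolicy: RestartContainer # memory changes restart this container
    resources:
      requests:
        cpu: "500m"
        memory: "256Mi"
      limits:
        cpu: "1"
        memory: "512Mi"
```

A resize is then requested through the Pod's resize subresource, for example with `kubectl patch pod web --subresource resize --patch '{"spec":{"containers":[{"name":"app","resources":{"requests":{"cpu":"800m"}}}]}}'`, and the kubelet applies it according to the declared policy.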
2) Delegated Job control with managedBy
Kubernetes 1.35 introduces the Job managedBy field, enabling clean delegation of Job reconciliation and status synchronization to an external controller (for example, multi-cluster dispatch patterns like MultiKueue). This reduces controller conflicts and simplifies multi-cluster execution models. This is especially useful if you run multi-cluster batch and want one controller to own status and coordination cleanly.
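A minimal sketch of the delegation model: setting spec.managedBy at Job creation tells the built-in Job controller to skip this object so the named external controller owns reconciliation and status. The Job name and workload below are hypothetical; the managedBy value shown is the one the MultiKueue docs use for this pattern.

```yaml
# Hypothetical Job delegated to an external controller via spec.managedBy.
# The field is immutable and must be set when the Job is created.
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-run
spec:
  managedBy: kueue.x-k8s.io/multikueue   # external controller owns this Job
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox:1.36
        command: ["sh", "-c", "echo done"]
```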
3) Reliable Pod update tracking with .metadata.generation
Kubernetes 1.35 adds reliable update tracking for Pods: .status.observedGeneration now reports the .metadata.generation that the kubelet has most recently processed, so controllers and operators can confirm when the latest Pod spec changes have taken effect. This improves operational confidence for workflows like in-place updates and automated remediation, and helps automation and SRE tooling reliably distinguish “spec applied” from “spec pending,” reducing blind spots during rollouts or resizing.
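An illustrative fragment of how the two fields line up on a Pod (the values are examples): automation can treat the spec as applied only once the two numbers match, for instance by comparing `kubectl get pod <name> -o jsonpath='{.metadata.generation}'` against `{.status.observedGeneration}`.

```yaml
# Illustrative Pod fragment: each spec update bumps metadata.generation;
# the kubelet raises status.observedGeneration once it has processed it.
metadata:
  generation: 3
status:
  observedGeneration: 3   # equal to generation: spec applied; lower: spec pending
```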
What’s changing in Kubernetes 1.35?
1) cgroup v1 is removed; cgroup v2 is required
Control groups (cgroups) are a Linux kernel feature used to account for and limit resources—like CPU and memory—for processes. Kubernetes relies on cgroups through the kubelet and container runtime to enforce resource requests/limits and manage node stability. Moving to cgroup v2 brings a more unified and modern resource management model, but it also means older operating systems (OL7) that don’t support cgroup v2 can’t run newer kubelets reliably.
Why this can be disruptive: If a node OS doesn’t support cgroup v2, the kubelet may fail to start after upgrade—potentially turning an upgrade into node unavailability and workload disruption. If you have OL7, prioritize migrating those node pools to OL8 or higher.
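Before scheduling the upgrade, you can verify each node's cgroup version directly. A minimal check, run on the node itself, inspects the filesystem mounted at /sys/fs/cgroup:

```shell
# Prints the filesystem type mounted at /sys/fs/cgroup:
# 'cgroup2fs' means cgroup v2 (ready for Kubernetes 1.35);
# 'tmpfs' indicates the legacy cgroup v1 hierarchy, so the node OS
# needs to be migrated (e.g., OL7 -> OL8) before upgrading.
stat -fc %T /sys/fs/cgroup
```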
2) Networking changes: NGINX Ingress retirement and kube-proxy IPVS deprecation
Ingress is the Kubernetes API for routing external HTTP and HTTPS traffic to services in your cluster. The key change here is not that Kubernetes 1.35 is retiring the Ingress API; rather, Kubernetes 1.35 arrives as the community-maintained NGINX Ingress project reaches retirement. The Kubernetes community announced that NGINX Ingress would stop receiving maintenance in March 2026, after which there will be no further releases, bug fixes, or security updates. Existing deployments are expected to continue working, but customers using the NGINX Ingress controller should plan a migration to a supported alternative. For OKE customers, Oracle has already published guidance, including migration to a Gateway API controller or another supported ingress solution.
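To make the Gateway API migration path concrete, here is a hedged sketch of a Gateway plus an HTTPRoute replacing a simple path-based Ingress rule. The names, the gatewayClassName, and the backend Service are all assumptions; the class name in particular is set by whichever Gateway controller you adopt.

```yaml
# Hypothetical Gateway API equivalent of a basic Ingress rule.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: public-gw
spec:
  gatewayClassName: example-gateway-class  # assumption: provided by your controller
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
spec:
  parentRefs:
  - name: public-gw            # attaches the route to the Gateway above
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /app
    backendRefs:
    - name: app-svc            # assumed backend Service
      port: 8080
```

Because HTTPRoute separates routing rules from the Gateway that exposes them, platform teams can own the Gateway while application teams own their routes, which is one of the main design motivations behind the Gateway API.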
kube-proxy runs on every node and implements Kubernetes Service networking by forwarding traffic sent to a Service virtual IP to the appropriate Pods. It supports multiple backends, including iptables and ipvs, but in Kubernetes 1.35 the ipvs mode is deprecated. Kubernetes documentation describes nftables as the recommended replacement for ipvs, with iptables also a better option than ipvs on older Linux systems that cannot use nftables.
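For clusters where you control the kube-proxy configuration, the backend switch is a single field in the KubeProxyConfiguration; the fragment below is a sketch. On managed node pools, verify how kube-proxy is configured on your platform before assuming you can change this directly.

```yaml
# Sketch of a kube-proxy configuration selecting the nftables backend,
# the recommended replacement for the deprecated ipvs mode.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "nftables"   # was "ipvs"; "iptables" remains a fallback on older kernels
```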
Why this can be disruptive: Networking is on the critical path for application availability. If you rely on a retiring ingress controller or IPVS mode, upgrades can require configuration changes and careful validation to avoid traffic-impacting surprises.
3) Support timelines: Kubernetes 1.32
Kubernetes version lifecycles are short, and staying current helps avoid compressed upgrade timelines later.
- Kubernetes 1.32 goes out of support 30 days after 1.35 is released on OKE.
- If 1.35 is expected in April 2026, plan for 1.32 end of support roughly 30 days after that release.
Recommended action items
- Inventory node OS versions across all node pools. If you have OL7, prioritize migrating those node pools to OL8 or higher (link).
- Plan upgrades around supported versions. OKE supports three active minor versions; with 1.35, that’s 1.33, 1.34, and 1.35. Upgrade dev/test first, validate, then roll to production (link).
- Run a networking readiness review: confirm your ingress strategy, check for IPVS usage, and validate any traffic-handling assumptions ahead of the upgrade window (link).
