January/February 2026
Welcome to our first CKS Roundup. We are investing deeply in our Containers, Kubernetes, and Serverless (CKS) services to make OCI the best place to run these workloads: securely, reliably, and at scale. We’ll tell that story here, in a curated collection of blog posts, each covering an important update to OCI’s CKS capabilities. The thread through these updates is momentum: steadier endpoints with smarter networking, safer day-two operations, faster upgrades and repairs, and a clear runway for AI on Kubernetes. We hope you can quickly see what changed, why it matters, and what to try next.
- OKE & Kubernetes AI Conformance: Standards you can build AI on. OKE is among the first platforms certified under the CNCF’s Kubernetes AI Conformance Program, a community standard for running training and inference on Kubernetes. Certification signals support for the building blocks practitioners care about—accelerators, scheduling, topology awareness, monitoring—and reflects OCI’s investment in open, high‑performance GPU infrastructure. If you’re planning production AI on Kubernetes, this is your green light to evaluate on OKE with confidence. Read the full post: OCI Kubernetes Engine (OKE) Achieves Kubernetes AI Conformance
- OKE and NVIDIA NIM: A practical path to enterprise LLM inference. If the question is “How do we run LLMs reliably, securely, and at scale?”, this post offers a practical answer: NVIDIA NIM microservices on OKE. You get prebuilt, GPU‑optimized inference containers paired with OCI’s accelerator shapes, Kubernetes autoscaling and load balancing, and native logging/monitoring—all inside your VCN with Vault‑managed secrets. It’s a production path for chatbots, copilots, and RAG pipelines that values performance and operational sanity in equal measure. Start here to stand up your first NIM endpoint on OKE. Read the full post: Running NIM on OKE: A Scalable Foundation for Enterprise-Grade LLM Inference
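To make the shape of that deployment concrete, here is a minimal sketch of a GPU-backed inference Deployment on OKE. The Deployment name and container image are placeholders, not official NIM references; pull the actual NIM container and configuration from NVIDIA’s documentation and the linked post.

```yaml
# Illustrative sketch only: names and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: llm-inference          # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: llm-inference
  template:
    metadata:
      labels:
        app: llm-inference
    spec:
      containers:
      - name: nim
        image: <your-nim-image>    # placeholder: the NIM container from NVIDIA
        resources:
          limits:
            nvidia.com/gpu: 1      # one GPU per replica on an OCI accelerator shape
        ports:
        - containerPort: 8000
```

From there, standard Kubernetes autoscaling and a LoadBalancer Service put the endpoint behind OCI load balancing inside your VCN.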
- Mixed Node Clusters: The successful mashup is ready. OKE’s new mixed node support lets you combine managed nodes, virtual nodes, and self‑managed nodes in a single Kubernetes cluster—so stateful databases land on predictable managed nodes, elastic frontends burst on serverless virtual nodes, and high‑performance AI/ML jobs get the custom tuning of self‑managed nodes. The payoff is simpler operations, better utilization, and freedom to place each workload where it thrives without multiplying clusters. Read the full post: OCI Kubernetes Engine (OKE) Introduces Support for Mixed Node Clusters
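In practice, workload placement in a mixed cluster uses ordinary Kubernetes scheduling. As a sketch, you can steer a stateful database onto a particular node pool with a node label; the `pool: managed-db` label below is a custom label you would apply to your managed node pool yourself (an assumption for illustration, not an OKE-provided label):

```yaml
# Sketch: pin a stateful workload to a labeled managed node pool.
# "pool: managed-db" is a custom label you set on the node pool.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: orders-db
spec:
  serviceName: orders-db
  replicas: 1
  selector:
    matchLabels:
      app: orders-db
  template:
    metadata:
      labels:
        app: orders-db
    spec:
      nodeSelector:
        pool: managed-db       # schedule only onto the labeled managed nodes
      containers:
      - name: db
        image: postgres:16
```

Elastic frontends can omit the selector (or target virtual nodes) and burst independently, all within the same cluster.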
- Non‑destructive worker node updates: Keep the node, change the boot. With boot volume replacement, OKE lets you update managed worker nodes—including bare metal and GPU shapes—without terminating the underlying instance. You can roll forward Kubernetes versions, host OS images, metadata, SSH keys, or boot volume size while OKE cordons and drains pods to honor your availability settings. It’s faster than terminate/replace, avoids capacity surprises, and keeps OCIDs and IPs stable—great for AI/ML fleets and tight change windows. Read the full post: Non-Destructive Kubernetes Worker Node Updates
- Worker Node Updates Via API: Fix it fast, keep the node. OKE now supports node repair actions—reboot and boot volume replacement—for both managed and self‑managed nodes, on VMs and bare metal, via the OCI API or directly through the Kubernetes API. These actions respect eviction grace periods, Pod Disruption Budgets, and maxUnavailable, so you can balance repair speed against workload availability. Use them to clear GPU/driver issues, undo configuration drift, or restore known‑good images—without swapping instances or changing addresses. It’s a faster path to healthy clusters. Read the full post: Kubernetes Worker Node Repair
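Because repair actions honor Pod Disruption Budgets, it’s worth having one in place before you trigger a reboot or boot volume replacement. This is a standard Kubernetes PDB (the app label is illustrative) that keeps at least two replicas running while nodes are drained:

```yaml
# Standard Kubernetes PDB: during a node drain, evictions pause
# whenever fewer than two "app: api" pods would remain available.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: api-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: api
```

With the budget in place, a repair action proceeds node by node without dropping your service below its availability floor.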
- New OKE Load Balancer Support: Steadier addresses, calmer cutovers. OKE load balancers get smarter defaults and steadier addresses—that’s the theme of this trio of updates. OKE now lets you set cluster‑level default backend NSGs (with a simple annotation to opt in), pin public endpoints with a reserved‑ips annotation, and assign specific private IPv4/IPv6 addresses for NLBs. The net effect is fewer change tickets, faster cutovers, and cleaner manifests that are easy to audit. If you’ve been juggling security lists, DNS, and firewall updates, this post shows the calmer way forward. Read the full post: What’s new for load balancers on OKE – governance by default, predictable IPs, cleaner manifests, and compartment annotation
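The pattern looks like a plain LoadBalancer Service plus annotations. In the sketch below, `oci.oraclecloud.com/load-balancer-type` is the documented OKE annotation for choosing an NLB; the reserved-IPs key is shown as a placeholder, since the exact annotation names are in the linked post and OKE docs:

```yaml
# Sketch of the annotation-driven pattern. The reserved-ips key below
# is a placeholder, not the real annotation name; see the linked post.
apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    oci.oraclecloud.com/load-balancer-type: "nlb"         # documented: provision an NLB
    oci.oraclecloud.com/<reserved-ips-annotation>: "..."  # placeholder: pin the public endpoint IP
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
```

Because the endpoint address and backend NSGs are declared alongside the Service, the manifest itself becomes the auditable record of your networking intent.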
- Generic VNICs with OKE: When one VNIC isn’t enough. Generic VNIC Attachment (GVA) gives OKE the kind of network control power users have been asking for: you decide how many VNICs a node gets, tune each one (subnet, IP family/count, NSGs, tags), and even pin specific pods to specific VNICs. That unlocks clean isolation between teams, traffic splits for prod vs. dev, smarter IP consumption, and on bare metal, real multi‑NIC throughput. Pick your performance mode (VFIO, SR‑IOV/E1000, or paravirtualized) and build to fit. GVA is in limited availability for enhanced clusters—ask your Oracle account team to get started. Read the full post: Introducing Generic VNIC Attachment: A New Era of Network Flexibility for OCI Kubernetes Engine
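Conceptually, pinning a pod to a VNIC is a per-pod declaration. The annotation key in this sketch is purely hypothetical, shown only to illustrate the idea; since GVA is in limited availability, get the real mechanism from your Oracle account team and the linked post:

```yaml
# Purely illustrative: the annotation key below is hypothetical,
# not a real GVA API. It sketches the idea of binding a pod's
# traffic to one of several VNICs attached to the node.
apiVersion: v1
kind: Pod
metadata:
  name: ingest
  annotations:
    example.oraclecloud.com/target-vnic: "vnic-2"   # hypothetical key
spec:
  containers:
  - name: ingest
    image: busybox
    command: ["sleep", "infinity"]
```

The payoff of per-pod VNIC placement is that isolation and throughput decisions move from cluster topology down to individual workloads.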
- Oracle Functions Architect Spotlight: Thank you, Lavanya Siliveri. Great products get better when practitioners share what works—and when they write it down so the rest of us can reuse it on Monday morning. This month we’re tipping our hat to Oracle Cloud Architect Lavanya Siliveri, whose clear, hands‑on posts help teams move real workloads onto Oracle Functions with fewer surprises. Her writing bridges the gap between architecture slides and production: practical migration patterns, crisp guidance on scaling and limits, and cleaner ways to build Functions you can maintain. We’ll spotlight more customers, partners, and architects in future editions—if you’ve got a story worth sharing, stay tuned. Explore Lavanya’s recent work: Lavanya Siliveri’s Blog
We have much more planned for 2026, and we’ll bring you the highlights as they land. If something here sparked an idea—or a new project or migration—don’t be shy about reaching out and letting us know. We’d love to feature your story in a future blog or the next CKS Roundup!
