“Hyperscalers? They’re all basically the same.”

I hear this a lot, but the reality is different—cloud workloads aren’t created equal, and neither are the platforms behind them. The difference comes down to the core engineering choices that shaped them.

As someone who has spent decades working inside these hyperscale clouds, I view Oracle Cloud Infrastructure (OCI) as the result of a series of deliberate choices that set it apart and deliver meaningful advantages for our customers. Those choices map to four key differentiators: compute, networking, storage, and consistency.

1. Compute that keeps up with AI

Earlier in my career, I led Amazon's Kernel and Operating System team, responsible for the Linux and Xen stack that powered their virtualization environment. At the time, virtualization seemed like the default for cloud computing: it used hardware efficiently, let one fleet serve many customers, and scaled quickly with demand. For a long time, the virtualization-first approach made sense.

When Oracle started designing OCI, we reconsidered some of this standard thinking. We chose to emphasize bare-metal infrastructure, giving our customers greater performance, security, and flexibility. We also moved the virtualization stack off the host entirely, onto dedicated hardware. Because no hypervisor competes with customer workloads for host CPU cycles, this eliminated most hypervisor-related performance jitter.

Those decisions directly impacted how OCI runs today. We offer compute infrastructure that is highly efficient and cost-effective, yet capable of supporting some of the largest AI clusters in the world.
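
To make this concrete, here is a minimal sketch of launching a bare-metal instance with the OCI Python SDK. The OCIDs, availability domain, and shape name are placeholders; check your tenancy for the bare-metal (BM.*) shapes actually available to you.

```python
# Minimal sketch: launching a bare-metal OCI compute instance with the
# OCI Python SDK (pip install oci). All OCIDs and the shape name below
# are placeholders, not real values.
import oci

config = oci.config.from_file()  # reads ~/.oci/config by default
compute = oci.core.ComputeClient(config)

details = oci.core.models.LaunchInstanceDetails(
    availability_domain="Uocm:PHX-AD-1",           # placeholder AD
    compartment_id="ocid1.compartment.oc1..example",
    shape="BM.Standard3.64",                       # a bare-metal shape: the whole host, no hypervisor
    display_name="bare-metal-demo",
    create_vnic_details=oci.core.models.CreateVnicDetails(
        subnet_id="ocid1.subnet.oc1..example",
    ),
    source_details=oci.core.models.InstanceSourceViaImageDetails(
        image_id="ocid1.image.oc1..example",
    ),
)

instance = compute.launch_instance(details).data
print(instance.id, instance.lifecycle_state)
```

Note that a bare-metal shape is requested through the same API as a VM; only the shape name differs. That is the design point: no hypervisor tax, same cloud ergonomics.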

2. Cloud networking reimagined

We took the same deliberate approach to OCI's networking, determining that traditional cloud network architectures weren't designed for the scale and performance demanded by Oracle AI Database workloads.

So, we made an early bet on RoCE v2 (RDMA over Converged Ethernet) to give customers high-performance, multi-tenant networking at a cost comparable to traditional shared IP networks. This proved prescient: it positioned OCI for the AI revolution by letting us deliver the high-throughput, low-latency inter-node communication that massive GPU training and inference clusters need.
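
To see why that interconnect matters, here is a rough back-of-envelope calculation of my own (illustrative numbers, not OCI measurements) of the traffic a single gradient synchronization generates in a large training cluster:

```python
# Back-of-envelope: per-GPU traffic for one ring all-reduce gradient sync.
# Illustrative numbers only; real clusters overlap communication with compute.

def ring_allreduce_bytes_per_gpu(param_count: float, bytes_per_param: int, num_gpus: int) -> float:
    """Each GPU sends/receives ~2*(N-1)/N of the gradient size in a ring all-reduce."""
    grad_bytes = param_count * bytes_per_param
    return 2 * (num_gpus - 1) / num_gpus * grad_bytes

# Example: a 70B-parameter model, fp16 gradients, 1,024 GPUs.
traffic = ring_allreduce_bytes_per_gpu(70e9, 2, 1024)
print(f"~{traffic / 1e9:.0f} GB moved per GPU per sync step")

# At ~12.5 GB/s (100 Gb/s) of usable per-GPU bandwidth, done naively that is:
print(f"~{traffic / 1e9 / 12.5:.0f} s per sync step")
```

Hundreds of gigabytes per GPU, every synchronization step. That is why high-throughput, low-latency RDMA fabric is not a nice-to-have at this scale.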

Recent innovations like Oracle Acceleron simplify cloud networking while dramatically boosting throughput, reliability, and security. We combine this with generous OCI data egress policies: our customers get 10 TB per month at no charge, and low rates beyond that amount. Together, these networking decisions translate into tangible business value. We don't believe our customers' network performance gains should be offset by unpredictable costs.
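
The arithmetic of that policy is simple enough to sketch. The 10 TB free tier is the one stated above; the per-GB rate is a placeholder for illustration, so check current OCI pricing for the real number:

```python
# Sketch: estimating monthly egress cost under a "first 10 TB free" policy.
# The free tier is from the post; the per-GB rate beyond it is a
# placeholder, not a quoted price.

FREE_TB = 10
PLACEHOLDER_RATE_PER_GB = 0.0085  # hypothetical rate for illustration

def egress_cost(total_tb: float) -> float:
    billable_gb = max(0.0, total_tb - FREE_TB) * 1024
    return billable_gb * PLACEHOLDER_RATE_PER_GB

for tb in (5, 10, 50):
    print(f"{tb:>3} TB/month -> ${egress_cost(tb):,.2f}")
```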

3. Storage that works for you

We were also thoughtful in how we designed OCI's storage services, so that performance scales independently of capacity.

This way, our customers have the flexibility to meet workload demands without rearchitecting their systems. That flexibility is only possible when independent performance scaling is built into the platform from the start. It's a simple idea, but one that removes a major source of friction for data-intensive and AI-driven applications.
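
As a concrete illustration: OCI block volumes express performance as "VPUs per GB," which can be changed on a live volume without resizing it. Here is a minimal sketch using the OCI Python SDK; the volume OCID is a placeholder, and valid VPU values depend on the current Block Volume service tiers:

```python
# Sketch: tuning block volume performance independently of capacity.
# The volume OCID is a placeholder; check the Block Volume docs for the
# vpus_per_gb values your tier supports.
import oci

config = oci.config.from_file()
blockstorage = oci.core.BlockstorageClient(config)

# Dial performance up for a batch window; capacity stays the same.
blockstorage.update_volume(
    volume_id="ocid1.volume.oc1..example",
    update_volume_details=oci.core.models.UpdateVolumeDetails(
        vpus_per_gb=30,  # a higher-performance tier; 10 is "balanced"
    ),
)
```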

4. Consistency by design

These core infrastructure decisions don't just improve performance; they enable consistency. OCI's architecture means the same capabilities are available in every deployment model a customer chooses.

That’s why offerings like OCI Dedicated Regions and Oracle Alloy are possible. They allow our customers and partners to deploy the full OCI stack—infrastructure to applications—in locations they control, starting as small as three racks and scaling up to full-region capacity as needed. In the case of Alloy, partners get to add their own branding and own the customer relationship, thereby becoming cloud providers themselves.

Wherever and however OCI gets deployed, it's functionally the same. It's not a subset of our public cloud: it's the same 200+ AI and cloud services, the same performance, and the same operating model. Customers can focus on their workloads rather than on tradeoffs in functionality and performance.
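
Here is a small sketch of what that consistency means from a developer's seat. The profile names and OCIDs are hypothetical, but the point holds: the same SDK code runs unchanged whether it points at a public region or a Dedicated Region, because the control plane exposes the same APIs.

```python
# Sketch: one function, any OCI deployment model. Only the config
# profile (region/endpoint, credentials) changes; profile names and
# the compartment OCID here are hypothetical.
import oci

def list_instances(profile_name: str, compartment_id: str):
    config = oci.config.from_file(profile_name=profile_name)
    compute = oci.core.ComputeClient(config)
    return compute.list_instances(compartment_id=compartment_id).data

# Same function, two deployment models, no code changes:
public = list_instances("DEFAULT", "ocid1.compartment.oc1..example")
dedicated = list_instances("MY_DEDICATED_REGION", "ocid1.compartment.oc1..example")
```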

Bringing it all together

These four decisions—across compute, networking, storage, and consistency—come together at the data and AI layer, where their impact is especially visible in today’s demanding AI workloads.

We built our AI database to be the substrate for AI, integrating our customers’ enterprise data across systems. Our generative AI agent platform helps our customers automate processes, boost employee productivity, and improve customer satisfaction by leveraging that data.

This is all supported by our unified AI data platform. We’ve made OCI a hub for all the leading generative AI models from Cohere, Google, Meta, OpenAI, and xAI.
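
To show what that hub looks like in practice, here is a hedged sketch using the OCI Python SDK's Generative AI inference client. The compartment and model OCIDs are placeholders, and the exact request classes vary by SDK version and model family, so treat this as an outline rather than a recipe. Swapping the model OCID is how you move between hosted model families.

```python
# Sketch: calling a hosted model through OCI Generative AI inference.
# Class names follow the SDK's generative_ai_inference module; the
# compartment and model OCIDs below are placeholders.
import oci

config = oci.config.from_file()
client = oci.generative_ai_inference.GenerativeAiInferenceClient(config)

response = client.chat(
    oci.generative_ai_inference.models.ChatDetails(
        compartment_id="ocid1.compartment.oc1..example",
        serving_mode=oci.generative_ai_inference.models.OnDemandServingMode(
            model_id="ocid1.generativeaimodel.oc1..example",  # placeholder model OCID
        ),
        chat_request=oci.generative_ai_inference.models.CohereChatRequest(
            message="Summarize last quarter's support tickets.",
            max_tokens=200,
        ),
    )
)
print(response.data)
```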

So, yes: Hyperscaler clouds may appear superficially similar, but the details matter. Each layer of OCI I've described here, from bare-metal infrastructure to AI services, reflects deliberate engineering and architectural choices. Those choices shape not just the day-to-day experience our customers get, but also what they can build.

That’s the difference between a cloud that gets our customers through today, and one built to evolve with their organization.