The AI revolution is accelerating like never before, and so is the demand for the critical infrastructure that makes it possible. Oracle is building hundreds of new data centers (cloud regions) to meet growing demand for our AI and cloud services.

We’re creating and maintaining some of the largest data centers in the world along with smaller dedicated cloud regions. 

We’ve already launched 16 cloud regions in the last 12 months alone, with far more coming this financial year. Much of that pace comes from our distributed cloud options, including Oracle Alloy and OCI Dedicated Region. Plus, we continue to invest in new public cloud regions all over the world, from Japan, Morocco, and Spain to the United States and beyond.

It’s a lot of growth, and with it come incredible career opportunities for data center technicians, operations experts, and managers.

The infrastructure behind AI 

Oracle Cloud Infrastructure (OCI) is the cloud of choice for AI startups that need massive compute superclusters to run their workloads.

In fact, some of our new data centers house an OCI Supercluster, where thousands of computers work together to solve complex problems. This is the technology that makes generative AI models possible.

The OCI Supercluster promises to revolutionize the way startups and enterprises approach building and training AI models. With its purpose-built AI capabilities, OCI offers unprecedented speed and reliability, regardless of the distributed cloud location.

Technical edge

The OCI Supercluster scales up to 32,768 GPUs today and will reach 65,536 GPUs in the future, leveraging RDMA cluster networking and local storage to achieve rapid training and inferencing on large-scale AI models.
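
To make that architecture concrete, here is a minimal, hypothetical sketch of how a multi-node training job typically initializes on a large GPU cluster, using PyTorch's NCCL backend (which can run over RDMA-capable networks). The model, hyperparameters, and launch setup are illustrative assumptions for this post, not Oracle's internal tooling.

```python
# Illustrative multi-node training sketch (assumptions, not Oracle tooling).
# Assumes a launcher such as torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE,
# and that the cluster fabric (for example, RDMA networking) is visible to NCCL.
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # One process per GPU; NCCL handles the cross-node collectives
    # (the all-reduce that keeps gradients in sync on every GPU).
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model; a real workload would be a large generative model.
    model = torch.nn.Linear(4096, 4096).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    data = torch.randn(32, 4096, device=f"cuda:{local_rank}")

    for _ in range(10):
        optimizer.zero_grad()
        loss = model(data).square().mean()
        loss.backward()   # gradients are all-reduced across every GPU in the job
        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Launched across many nodes with a utility such as torchrun, each process drives one GPU, and it is the cluster network that keeps gradient synchronization from becoming the bottleneck as the job scales.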

OCI Compute provides industry-leading performance and value for bare metal and virtual machine instances powered by NVIDIA GPUs, a fit for a wide range of AI applications, including computer vision, natural language processing, recommendation systems, and generative AI. Put simply? This means more processing performance and potential in one place, and it makes it easier to simulate the kind of massive training workloads these new AI models need.
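
As a rough sketch of what requesting one of those GPU instances can look like, here is how a customer might launch an instance with the OCI Python SDK. Every OCID, the availability domain, and the shape name below are placeholders; a real deployment would use a supported GPU shape and image, and would often be driven through Terraform or the console rather than a one-off script.

```python
# Illustrative sketch: launching a GPU compute instance with the OCI Python SDK.
# All OCIDs, the availability domain, and the shape name are placeholders.
import oci

config = oci.config.from_file()  # reads ~/.oci/config by default
compute = oci.core.ComputeClient(config)

details = oci.core.models.LaunchInstanceDetails(
    compartment_id="ocid1.compartment.oc1..example",
    availability_domain="Uocm:PHX-AD-1",            # placeholder AD name
    display_name="gpu-training-node",
    shape="BM.GPU.EXAMPLE.8",                        # placeholder GPU shape
    source_details=oci.core.models.InstanceSourceViaImageDetails(
        image_id="ocid1.image.oc1..example"          # placeholder GPU image OCID
    ),
    create_vnic_details=oci.core.models.CreateVnicDetails(
        subnet_id="ocid1.subnet.oc1..example"        # placeholder subnet OCID
    ),
)

instance = compute.launch_instance(details).data
print(f"Launched {instance.display_name}: {instance.lifecycle_state}")
```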

Customer needs

We’re not only building bigger superclusters and more cloud regions to meet AI demand; we’re also developing smaller clouds with a minimal footprint, meeting our customers wherever their needs are.

This means our customers don’t have to compromise service for flexibility. They get the same OCI services whether they choose public cloud, multicloud, sovereign cloud, or dedicated cloud.

Beyond expanding our cloud offerings, we’re also leading the way in multicloud partnerships—like those with Microsoft Azure and Google Cloud—that give customers choice and flexibility.

It’s an exciting time to be part of our story, and it’s going to take talented people to help write the next chapter. Explore our latest OCI data center roles today to create the future with us.