By Aaron Ricadela
As cloud computing subsumes larger portions of scientific simulations and AI, furnishing state-of-the-art graphics chips online has become a key strategy for public cloud vendors.
Long the province of specialized processing to boost performance in video games, graphics processing units (GPUs) from Nvidia and others have emerged as important tools for tackling supercomputing jobs in genetics, engineering, and other disciplines. They’ve also become a cornerstone of machine learning, due to their ability to churn through massive amounts of training data quickly.
GPUs work differently from general-purpose computer chips: they divide complex calculations into smaller pieces that hundreds of specialized “cores” process in parallel. That knack for handling large amounts of data has also landed GPUs in drones, robots, and self-driving cars.
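The divide-and-process-in-parallel pattern can be sketched in ordinary Python, with a thread pool standing in for GPU cores. This is an illustrative analogy only (the function names are invented for this sketch): a real GPU runs the same operation across thousands of hardware cores via frameworks such as CUDA, rather than a handful of CPU threads.

```python
from concurrent.futures import ThreadPoolExecutor

def scale_chunk(chunk, factor):
    # Each "core" applies the same simple operation to its slice of the data.
    return [x * factor for x in chunk]

def parallel_scale(data, factor, n_workers=4):
    # Divide the complex calculation into smaller pieces...
    size = (len(data) + n_workers - 1) // n_workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # ...and hand each piece to its own worker, GPU-style.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = pool.map(scale_chunk, chunks, [factor] * len(chunks))
    # Reassemble the partial results in order.
    return [x for chunk in results for x in chunk]

print(parallel_scale(list(range(8)), 2))  # → [0, 2, 4, 6, 8, 10, 12, 14]
```

The speedup on a GPU comes from the sheer number of cores: the same per-chunk operation runs on hundreds or thousands of them at once instead of four worker threads.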
Public cloud services such as Oracle Cloud Infrastructure can put the latest graphics processors in scientists’ hands faster than waiting for new hardware to arrive in university- or government-procured machines. Demand for GPUs is helping push workloads to the cloud, where vendors can quickly make new hardware available at pay-for-consumption prices.
Oracle offers Nvidia’s Volta-architecture V100 data center processors for supercomputing and AI, complementing its existing Pascal-based GPUs. Oracle also introduced high-performance networking among machines in a cluster that works with Nvidia’s powerful HGX-2 platform, which will be available in Oracle Cloud Infrastructure.
A University of Bristol group studying vaccines is using GPU processing delivered through Oracle Cloud Infrastructure to generate accurate molecular models and simulations from cryogenic electron microscope data. “When we buy microscopes, we have to buy big computers,” says Christopher Woods, a research software engineering fellow at the university. “We all want to see proteins—you can actually see movies of them.”
Aaron Ricadela is a director of strategic communications at Oracle.