By Sanjay Basu, Product Management, AI/ML & Emerging Technologies
Artificial intelligence (AI) is poised to be one of the biggest things to hit the technology industry (and many other industries) in the coming years. According to Forbes, recent research predicts that by 2025, AI-driven enterprises will be up to 10 times more efficient and hold twice the market share of those that don’t adopt the technology.
New research from Gartner shows that adoption is underway: the share of organizations that have deployed AI grew from 4 percent to 14 percent between 2018 and 2019. They used AI to add business value in the form of advanced analytics, intelligent processes, and advanced user experiences.
For many organizations, the question is not whether, but when and how, to deploy AI.
For starters, organizations are making significant investments in data science teams to develop AI applications. At the same time, they must determine which infrastructure to use. Their decisions are important, as they will impact the efficiency and productivity of their IT and data science teams to effectively deliver business outcomes.
A chief consideration is computing power. While the tech industry has faced computing power challenges in the past, the computational power needed to process massive volumes of data to build an AI system and utilize techniques like deep learning is far greater than before.
To tackle the challenge, some enterprises near the start of their AI journey look to “spare cycles” in their data centers to run AI workloads, or develop solutions based on a single “spare” server, workstation, or small cluster of nodes. While this approach may initially appear less costly, a single-use minimal solution may not always integrate easily with broader solutions or user-facing tools. Without proper controls in place, test configurations can quietly become live configurations.
On the other hand, investing in on-premises hardware is expensive and requires dedicated maintenance and systematic refreshes.
In both cases, multiple AI “silos” can emerge that must be developed and managed simultaneously.
For data scientists, harnessing needed computational power means reliance on cloud engineers and IT departments to provision and maintain the CPUs and GPUs that support AI application development.
And while collaboration with colleagues is a good thing, overreliance on already strapped IT resources is not. In fact, it can slow both parties down.
That’s why Oracle Cloud Infrastructure VM for Data Science and AI offers data scientists a powerful cloud-based alternative for developing AI applications quickly and efficiently. The solution is a preconfigured environment that includes a virtual machine (VM) with an NVIDIA GPU and CUDA and cuDNN drivers, common Python and R integrated development environments (IDEs), Jupyter Notebooks, and open source machine learning (ML) and deep learning (DL) frameworks. For a high-performance shared file system, the BeeGFS client is also preconfigured in the current image.
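As a rough illustration, a data scientist logging into such an image might first sanity-check which tools are actually available before starting work. The package names below are examples of commonly bundled libraries, not a definitive list of the image's contents:

```python
# Minimal sketch: report which expected packages are importable in this image.
# The default module list is illustrative; the image's actual contents may differ.
import importlib.util

def check_environment(modules=("numpy", "pandas", "sklearn", "tensorflow", "torch")):
    """Return a dict mapping each package name to whether it can be imported."""
    return {name: importlib.util.find_spec(name) is not None for name in modules}

if __name__ == "__main__":
    for name, present in check_environment().items():
        print(f"{name}: {'installed' if present else 'missing'}")
```

Running this on first login gives a quick inventory without importing heavyweight frameworks, since `find_spec` only locates packages rather than loading them.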
Computational power, ease of use, productivity, and efficiency are at the forefront of why this solution is so attractive to data scientists.
It allows them to run machine learning models, including distributed models, in a single instance using up to eight NVIDIA Pascal GP100 GPUs. Oracle’s powerful automation capabilities enable the deployment of hundreds of instances for model experimentation and testing with a single click. No more requests for provisioning. No more maintenance issues or refreshes. Just straight deployment that’s ready to go, with the right open source frameworks installed, upgraded, and tested.
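As a hedged sketch of what scripted multi-instance deployment could look like from the command line, the loop below launches several instances of an image with the OCI CLI. The shape name, display names, and the environment variables holding OCIDs are placeholders, not values taken from this solution:

```shell
#!/bin/sh
# Hypothetical sketch: shape and OCID variables below are placeholders.
# Launch several preconfigured Data Science VM instances via the OCI CLI.
COUNT=3
for i in $(seq 1 "$COUNT"); do
  oci compute instance launch \
    --display-name "ds-vm-$i" \
    --availability-domain "$AVAILABILITY_DOMAIN" \
    --compartment-id "$COMPARTMENT_OCID" \
    --image-id "$DS_IMAGE_OCID" \
    --subnet-id "$SUBNET_OCID" \
    --shape "VM.GPU2.1"
done
```

The same pattern scales to hundreds of instances, or can be expressed declaratively in Terraform for repeatable experiment fleets.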
The solution includes a reference architecture that deploys a bastion host, training node, inference node, user application VM, and other components on Oracle Cloud Infrastructure. It uses a region with one availability domain and regional subnets; the same architecture can be used in a region with multiple availability domains.
It even runs on Autonomous Linux, which means real-time upgrades can occur without shutting down instances.
Security and data privacy are of utmost concern. Oracle Cloud Infrastructure VM for Data Science and AI includes a security-first design built for the enterprise. This means that data sets are encrypted and data privacy is protected through data minimization and transparency. In other words, Oracle has no insight into data on its cloud and is transparent about where this data is processed and stored.
Finally, data scientists can expand their compute resources by using compute autoscaling, or stop the compute instance when it’s not needed, to control costs. The VM even includes basic sample data and code for testing and exploring.
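For example, stopping an idle instance from the CLI and restarting it later is a two-command affair; the instance OCID below is a placeholder:

```shell
# Stop a GPU instance when it's not needed to avoid compute charges
# (the OCID in $INSTANCE_OCID is a placeholder)...
oci compute instance action --action STOP --instance-id "$INSTANCE_OCID"

# ...then start it again when work resumes.
oci compute instance action --action START --instance-id "$INSTANCE_OCID"
```

Scheduling these commands (for example, nights and weekends) is a simple way to keep GPU costs proportional to actual usage.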
A perfect use case for Oracle Cloud Infrastructure VM for Data Science and AI is autonomous driving. For example, an established German automotive company that designs systems powering self-driving cars found the solution to be one of the best options for handling the massive data and compute demands of training models for millions of self-driving cars.
In India, a top conglomerate that uses the solution’s GPU shapes for machine learning and deep learning training, powering applications based on speech-to-text, text-to-speech, and natural language processing, found significant cost and performance advantages in using the solution.
Oracle Cloud Infrastructure VM for Data Science and AI provides exceptional performance, security, and control that enables data scientists to build models and deliver business value faster. The all-in-one image provides everything a data scientist needs to get up and running quickly with pre-configured environments for deep learning that are useful in many industries across a wide range of applications.