Running electronic design automation (EDA) workloads on Oracle Cloud Infrastructure

April 22, 2021 | 8 minute read

Electronic design automation (EDA) is a category of software tools for designing electronic systems, such as integrated circuits and printed circuit boards. The tools work together in a design flow that chip designers use to design and analyze entire semiconductor chips. Because a modern semiconductor chip can have billions of components, EDA tools are essential to its design. This blog focuses on EDA for integrated circuits (ICs).

Traditionally, EDA workloads have stayed on-premises, but running EDA tools in the cloud doesn’t require any special processes on the part of the user. Any EDA software that can run locally can be installed and run in the cloud.

Sample architectures for running EDA workloads on Oracle Cloud

Let’s look at the two most common architectures for running EDA workloads in the cloud.

The first reference architecture presents a simple cluster architecture with a connection to an on-premises data center. In this type of architecture, the cluster and scheduler are already running in the on-premises data center, and the cloud environment works as an extension of that cluster.

A graphic depicting an example hybrid cluster architecture.

The second reference architecture presents a simple cluster architecture without a connection to an on-premises cluster.

A graphic depicting an example cloud cluster architecture.

Choosing Compute instances: Bare metal versus virtual machine

Oracle Cloud Infrastructure (OCI) Compute lets you provision and manage compute hosts, known as instances. You can choose between bare metal and virtual machine (VM) instances.

EDA workloads usually benefit from the instances with the best hardware resources, specifically CPUs. We recently announced the availability of our Intel Ice Lake and AMD Milan shapes as virtual machine and bare metal instances. One exciting aspect of our newly announced VM shapes is their flexibility: with flexible shapes, you decide how many cores and how much memory you want in a VM.

A graphic chart comparing the CPU, cores, memory, networking, and storage of VM.Optimized3.Flex and VM.Standard.E4.Flex.

The second option is to use bare metal instances. Bare metal instances provide the best resource density per instance.

A graphic chart comparing the CPU, cores, memory, networking, and storage of BM.Optimized3.36 and BM.Standard.E4.128.

Having these different shape options gives you the capability to deploy instances that align with your workload’s resource and licensing requirements. For example, you can quickly deploy VMs that exactly match the resource requirements of your EDA job, whether it’s a small job that requires only three cores and 13 GB of RAM or a large job that requires 64 cores and 1,024 GB of RAM.

If the licensing model for the EDA tool that you use is based on the number of hosts, you can deploy an Intel Ice Lake or AMD Milan bare metal instance and maximize the number of CPUs and memory per instance.

NUMA nodes on our X9 (Intel Ice Lake) and E4 (AMD Milan) flexible VMs are configured per socket and we limit the number of cores of a flexible VM to the number of cores per socket (18 for X9, 64 for E4). So, when you deploy an X9 or E4 flexible VM, all cores of that VM are in the same NUMA node no matter how many cores that flexible VM has.
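The sizing rules above can be sketched as a small helper. This is purely illustrative (the function and dictionary names are hypothetical, not part of any OCI SDK); the per-socket core limits of 18 for X9 and 64 for E4 come from this post, and memory limits are not checked here.

```python
# Hypothetical helper illustrating the flexible-shape sizing rules described
# above. The per-socket core limits (18 for X9, 64 for E4) come from the post;
# a flexible VM that stays within them keeps all cores in one NUMA node.
FLEX_SHAPE_MAX_CORES = {
    "VM.Optimized3.Flex": 18,   # X9 (Intel Ice Lake): 18 cores per socket
    "VM.Standard.E4.Flex": 64,  # E4 (AMD Milan): 64 cores per socket
}

def pick_flex_shape(cores: int, memory_gb: int) -> dict:
    """Return a shape configuration whose cores all fit in one NUMA node."""
    for shape, max_cores in FLEX_SHAPE_MAX_CORES.items():
        if cores <= max_cores:
            return {"shape": shape, "ocpus": cores, "memory_in_gbs": memory_gb}
    raise ValueError(f"No single-NUMA-node flexible shape offers {cores} cores")

# The two example jobs from the text:
small_job = pick_flex_shape(3, 13)     # fits the 18-core X9 shape
large_job = pick_flex_shape(64, 1024)  # needs the 64-core E4 shape
```

In a real deployment, the equivalent of this choice is the shape name plus the `ocpus` and `memory_in_gbs` values you pass when launching the instance.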


Storage

EDA workloads present unique challenges to storage systems, mainly because of heavy metadata operations, high small-file counts, and high capacity requirements. OCI addresses these challenges with multiple alternatives.

Block volumes

OCI Block Volume service lets you dynamically provision and manage block storage volumes. You can create, attach, connect, and move volumes as needed to meet your storage and application requirements. The Block Volume service uses NVMe-based storage infrastructure and is designed for consistency.

The Block Volume service supports creating volumes sized from 50 GB to a maximum size of 32 TB, in 1-GB increments. You can attach up to 32 volumes to an instance, with a maximum of 1 PB of attached volumes per instance. Latency performance is independent of the instance shape or volume size and is always submillisecond at the 95th percentile for the balanced and higher performance elastic performance options.

The higher performance elastic performance option is recommended for workloads with the highest IO requirements, such as large databases. This option provides the best linear performance scaling, at 75 IOPS per GB up to a maximum of 35,000 IOPS per volume. Throughput also scales at the highest rate, at 600 KBPS per GB up to a maximum of 480 MBPS per volume.
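The scaling rules above are simple enough to compute directly. The sketch below (a hypothetical helper, not an OCI API) applies the published figures: 75 IOPS per GB capped at 35,000 IOPS, 600 KB/s per GB capped at 480 MB/s, for volumes between 50 GB and 32 TB.

```python
# Illustrative calculator for the higher performance elastic option,
# using only the figures quoted in the text. Not part of any OCI SDK.
def higher_performance_limits(size_gb: int) -> tuple:
    """Return (IOPS, throughput in MB/s) for a volume of the given size."""
    if not 50 <= size_gb <= 32 * 1024:  # volumes range from 50 GB to 32 TB
        raise ValueError("volume size must be between 50 GB and 32 TB")
    iops = min(75 * size_gb, 35_000)                    # 75 IOPS/GB, capped
    throughput_mbps = min(600 * size_gb / 1000, 480.0)  # 600 KB/s per GB, capped
    return iops, throughput_mbps

# A 200 GB volume scales linearly; a 1 TB volume hits both caps.
print(higher_performance_limits(200))   # (15000, 120.0)
print(higher_performance_limits(1024))  # (35000, 480.0)
```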

NVMe devices

Some instance shapes, including the BM.Optimized3.36 shape in OCI, include locally attached NVMe devices; BM.Optimized3.36 has a 3.8-TB local NVMe drive. These devices provide low-latency, high-performance block storage that’s ideal for big EDA workloads that can benefit from fast temporary block storage.

High-performance file system alternatives

High-performance file systems support workloads that require the ability to read and write data at high throughput rates. OCI HPC File Systems (HFS), available on Oracle Cloud Marketplace, make it easy to deploy various industry-leading high-performance file servers. In as little as three clicks, customers can have a petabyte-scale file server up and running with double-digit gigabyte-per-second throughput.

When you deploy these HFS solutions, you can specify the requirements for your file system, including the following examples:

  • Type: Scratch or persistent

  • Workload type: Small, mixed, or large files

  • Filesystem sizing: Number of file servers and storage capacity

As you specify these parameters, the HFS Marketplace solution applies default parameters to reduce complexity and shorten time to deployment. For example, when building a scratch file system, it selects local NVMe storage. Similarly, for mixed or small-file workloads, it customizes metadata handling. File systems can be up and running in under 15 minutes.
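The defaulting behavior described above can be pictured as a small lookup. Everything in this sketch is an assumption for illustration only: the actual HFS Marketplace defaults are not published in this post, and the function name and returned values are hypothetical.

```python
# Purely illustrative sketch of the kind of defaulting logic described above.
# The only grounded facts are: scratch file systems select local NVMe, and
# metadata is customized for mixed/small-file workloads. All value names
# ("block_volumes", "optimized", etc.) are assumptions.
def hfs_defaults(fs_type: str, workload: str) -> dict:
    defaults = {"storage": "block_volumes", "metadata_tuning": "standard"}
    if fs_type == "scratch":
        # The post notes scratch file systems select local NVMe storage.
        defaults["storage"] = "local_nvme"
    if workload in ("small", "mixed"):
        # The post notes metadata is customized for small/mixed workloads.
        defaults["metadata_tuning"] = "optimized"
    return defaults

print(hfs_defaults("scratch", "small"))
# {'storage': 'local_nvme', 'metadata_tuning': 'optimized'}
```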

Using the HFS Marketplace solution, you can deploy BeeGFS, Lustre, and GlusterFS in a few clicks.

HFS customizable options

If you require maximum configurability for complex deployments, we also offer Oracle Cloud Marketplace stacks (web-based GUI) and Terraform-based Oracle QuickStart deployment templates on GitHub. Customize the Terraform templates or Marketplace stacks to meet your requirements and launch them through Resource Manager.

Oracle Cloud Marketplace stacks (web-based GUI)

  • BeeGFS

  • Lustre

  • GlusterFS

  • BeeOND (BeeGFS on-demand over RDMA)

  • NFS File Server with high availability

Terraform-based templates

  • IBM Spectrum Scale

  • BeeGFS

  • Lustre

  • GlusterFS

  • BeeOND (BeeGFS on-demand over RDMA)

  • Quobyte

  • NFS File Server with high availability


Networking

Oracle virtual cloud networks (VCNs) provide customizable and private cloud networks in OCI. Oracle’s highly scalable, flat network design limits the number of network hops between compute and storage to a maximum of two. Oracle doesn’t oversubscribe network resources, so customers experience a low-latency network with predictable performance.

Oracle is the only large cloud service provider to offer a performance SLA for networking. We guarantee consistent network performance for customers, so they can rely on predictable network responses to their application workloads.

To connect your on-premises environment to OCI, we provide solutions to connect directly to your OCI VCN through dedicated, private, high-bandwidth connections (FastConnect) or through site-to-site VPN connectivity (VPN Connect).

Security and compliance

As companies transition to the cloud for greater speed and agility, they’re also starting to see security as a cloud benefit instead of a risk. But with today’s larger and more diversified threat landscape, businesses need to be certain of the depth of their security before they trust the cloud with such important workloads. At Oracle, we anticipated this need and built our cloud from the ground up to address it. Security is especially important for EDA workloads, where it’s critical to protect the intellectual property.

OCI is a second-generation infrastructure-as-a-service offering architected on security-first design principles. These principles include isolated network virtualization and pristine physical host deployment, which provide superior customer isolation compared to earlier public cloud designs and reduced risk from advanced persistent threats. OCI benefits from tiered defenses and highly secure operations that span from the physical hardware in our data centers to the web layer, with protections and controls available in our cloud. Many of these protections also work with third-party clouds and on-premises solutions to help secure modern enterprise workloads and data where they reside.

To help you quickly deploy an environment that segregates access to resources based on job function, we have a customizable landing zone Terraform template that deploys a standardized environment in an OCI tenancy and meets the security guidance prescribed in the CIS Oracle Cloud Infrastructure Foundations Benchmark.

A graphic depicting the architecture for a secure landing zone that meets the CIS Foundations Benchmark.

Automation

Resource Manager is an OCI service that allows you to automate the process of provisioning your OCI resources. Using Terraform, Resource Manager helps you install, configure, and manage resources through the infrastructure-as-code model.

OCI makes it easy to deploy clusters quickly with Oracle Cloud Marketplace stacks that include all the key components you need to get up and running. A Terraform configuration codifies your infrastructure in declarative configuration files. Resource Manager lets you share and manage infrastructure configurations and state files across multiple teams and platforms, so you can easily automate deploying different-sized clusters for running your EDA workloads.

Why you should run EDA workloads on OCI

  • Latest generation CPUs: Oracle has partnered with Intel and AMD to offer its customers the latest generation of processors available as virtual machines and bare metal instances.

  • Flexible virtual machines: Instead of selecting one of the predefined shapes, you can now decide how many cores and how much memory a VM has.

  • Easy-to-deploy file systems: Oracle offers HPC customers an array of parallel file systems to choose from: NFS, IBM Spectrum Scale, BeeGFS, Lustre, and more. Customers can deploy HPC file systems on OCI in a few clicks.

  • High-bandwidth, low-latency network: Oracle offers low-latency, high-throughput networks to successfully run HPC use cases in the cloud. OCI’s network is a non-oversubscribed, highly scalable network with approximately 1 million network ports in each availability domain, with high-speed interconnections and latency < 100μs between the hosts in an availability domain.

  • Secure by design: Intellectual property is the most valuable asset for EDA customers. OCI provides zero-trust, security-first architecture with easy-to-implement security controls.

Every use case is different. The only way to know if Oracle Cloud Infrastructure is right for you is to try it. You can select either the Oracle Cloud Free Tier or a 30-day free trial, which includes US$300 in credit to get you started with a range of services, including compute, storage, and networking.

Oguz Pastirmaci
