Updated with data from April 2021
Enterprise workloads are often performance sensitive. At Oracle, we designed our cloud to deliver consistently high performance. As we improve our cloud over time, we will at a minimum measure performance annually and share that data with our customers, so that you can understand what to expect. In this post, we show test results from two common benchmarks and present the methodology to re-create our results.
We measured four of our commonly used Compute instances by using two well-known and reliable benchmarking suites: UnixBench on Linux and LINPACK on Windows. Each suite was run at two levels of workload complexity: with UnixBench, we varied the test concurrency; with LINPACK, we varied the number of equations solved.
We used UnixBench to test the performance of Linux instances. UnixBench is a suite of tests that measures the performance of various aspects of a UNIX- or Linux-based system. The result is an aggregate score that reflects overall system performance rather than any one individual component.
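For reference, UnixBench prints its aggregate result as a "System Benchmarks Index Score" line at the end of each run. The sketch below shows the typical run commands as comments and a small parser for that line; the repository URL and output format are assumptions based on the public UnixBench 5.x suite, not part of our test harness.

```python
import re

# Running the suite itself (not executed here; assumes the public UnixBench repo):
#   git clone https://github.com/kdlucas/byte-unixbench
#   cd byte-unixbench/UnixBench
#   ./Run           # single-threaded pass
#   ./Run -c 16     # 16 concurrent copies

def parse_index_score(run_output: str) -> float:
    """Extract the aggregate 'System Benchmarks Index Score' from ./Run output."""
    match = re.search(r"System Benchmarks Index Score\s+([\d.]+)", run_output)
    if match is None:
        raise ValueError("no index score found in output")
    return float(match.group(1))

# Example against a fragment of typical UnixBench output:
sample = "System Benchmarks Index Score                                3832.4"
print(parse_index_score(sample))  # 3832.4
```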
The following table shows the aggregate UnixBench performance for each of the tested Compute instance shapes with single-threaded and concurrent (multi-threaded) test configurations. Higher mean/median results indicate better performance, and standard deviation is provided to show consistency (smaller is better).
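The summary statistics in the tables can be reproduced from repeated runs with Python's standard library. The scores below are illustrative placeholders, not our measurements:

```python
import statistics

# Hypothetical aggregate scores from five repeated UnixBench runs of one shape
scores = [533, 537, 510, 552, 533]

print(round(statistics.mean(scores)))   # arithmetic mean
print(statistics.median(scores))        # middle value of the sorted runs
print(round(statistics.stdev(scores)))  # sample standard deviation
```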
Table 1A: Aggregate UnixBench Performance for Tested Compute Shapes - Intel based instances
| Shape | Test Concurrency | Mean | Median | Standard Deviation |
| --- | --- | --- | --- | --- |
| VM.Standard2.1 | 1 | 533 | 537 | 18 |
| VM.Standard2.1 | 2 | 746 | 748 | 22 |
| VM.Standard2.2 | 1 | 593 | 598 | 15 |
| VM.Standard2.2 | 4 | 1410 | 1413 | 18 |
| VM.Standard2.4 | 1 | 623 | 627 | 14 |
| VM.Standard2.4 | 8 | 2339 | 2343 | 27 |
| VM.Standard2.8 | 1 | 637 | 640 | 13 |
| VM.Standard2.8 | 16 | 3832 | 3842 | 50 |
Table 1B: Aggregate UnixBench Performance for Tested Compute Shapes - AMD based instances
| Shape | Test Concurrency | Mean | Median | Standard Deviation |
| --- | --- | --- | --- | --- |
| VM.Standard.E3.Flex 1 OCPU, 16 GB | 1 | 1430 | 1434 | 32 |
| VM.Standard.E3.Flex 1 OCPU, 16 GB | 2 | 2168 | 2175 | 41 |
| VM.Standard.E3.Flex 2 OCPU, 32 GB | 1 | 1542 | 1574 | 67 |
| VM.Standard.E3.Flex 2 OCPU, 32 GB | 4 | 3376 | 3724 | 510 |
| VM.Standard.E3.Flex 4 OCPU, 64 GB | 1 | 1598 | 1600 | 56 |
| VM.Standard.E3.Flex 4 OCPU, 64 GB | 8 | 5160 | 4947 | 703 |
| VM.Standard.E3.Flex 8 OCPU, 128 GB | 1 | 1584 | 1584 | 36 |
| VM.Standard.E3.Flex 8 OCPU, 128 GB | 16 | 7302 | 7302 | 424 |
We saw relatively consistent single-threaded performance across these instances, and a roughly proportional increase in performance when the tests were run in a concurrent configuration. Customers can expect similar performance for single-process workloads regardless of instance type, and proportional increases for concurrent workloads as the size of the Compute instance grows.
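That proportionality can be checked directly from the Intel-based results in Table 1A by dividing each shape's concurrent mean by its single-threaded mean:

```python
# Mean UnixBench scores from Table 1A (Intel-based shapes)
single = {"VM.Standard2.1": 533, "VM.Standard2.2": 593,
          "VM.Standard2.4": 623, "VM.Standard2.8": 637}
concurrent = {"VM.Standard2.1": 746, "VM.Standard2.2": 1410,
              "VM.Standard2.4": 2339, "VM.Standard2.8": 3832}

# Speedup of the concurrent run over the single-threaded run for each shape
for shape in single:
    print(f"{shape}: {concurrent[shape] / single[shape]:.1f}x")
```

The speedups (roughly 1.4x, 2.4x, 3.8x, and 6.0x) track the growing core counts, though not perfectly linearly.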
The following graphs plot these results:
We performed these tests in different regions and found them to be highly consistent across all tested regions. Here are the results by region:
We used LINPACK to test the performance of Windows instances. The LINPACK benchmark measures how quickly a system can solve a dense system of linear equations. The results show the average number of floating-point operations per second that an instance can perform, measured in gigaFLOPS (GFLOPS). We ran it with a small workload of 2,000 equations and with a larger workload of 40,000 equations.
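LINPACK's GFLOPS figure follows from the problem size n and the solve time, using the benchmark's standard operation count of 2n³/3 + 2n². A quick sanity check, with a made-up solve time for illustration:

```python
def linpack_gflops(n: int, seconds: float) -> float:
    """GFLOPS for an n x n LINPACK solve, using the standard 2n^3/3 + 2n^2 flop count."""
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    return flops / seconds / 1e9

# A hypothetical 40,000-equation run that took 122 seconds:
print(round(linpack_gflops(40_000, 122.0)))  # about 350 GFLOPS
```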
The following table shows the aggregate LINPACK performance for each of the tested Compute instance shapes. A higher number indicates better performance, and a lower standard deviation indicates less variation between runs. Note that because we used a LINPACK binary distributed by Intel, this data covers only Intel-based instances.
Table 2: Aggregate LINPACK Performance for Tested Compute Shapes - Intel based instances
| Shape | Test Size | Mean | Median | Standard Deviation |
| --- | --- | --- | --- | --- |
| VM.Standard2.1 | 2,000 | 19 | 19 | 2 |
| VM.Standard2.1 | 40,000 | 48 | 48 | 3 |
| VM.Standard2.2 | 2,000 | 50 | 55 | 10 |
| VM.Standard2.2 | 40,000 | 101 | 102 | 5 |
| VM.Standard2.4 | 2,000 | 87 | 84 | 15 |
| VM.Standard2.4 | 40,000 | 199 | 203 | 11 |
| VM.Standard2.8 | 2,000 | 184 | 184 | 19 |
| VM.Standard2.8 | 40,000 | 349 | 331 | 41 |
The following chart shows a summary of the average scores for small and large test sizes by instance type. The results show that performance increases as the size of the instance increases, with a steeper increase for tests with a larger number of equations. You can expect floating-point performance on Intel Xeon-based virtual machine Compute instances to scale linearly with a shape's core count.
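Linear scaling implies roughly flat throughput per OCPU, which the 40,000-equation means from Table 2 bear out:

```python
# Mean 40,000-equation GFLOPS from Table 2, keyed by the shape's OCPU count
results = {1: 48, 2: 101, 4: 199, 8: 349}

# Per-OCPU throughput stays in the same ballpark across shapes
for ocpus, gflops in results.items():
    print(f"{ocpus} OCPU: {gflops / ocpus:.1f} GFLOPS per OCPU")
```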
Similar to the Linux data, the Windows results are fairly consistent across the tested regions. The following graph shows minimal variation between regions:
We intended our testing to be easily reproducible. We used standard open source benchmarks and a straightforward testing methodology.
Our performance tests used the following parameters:
To reproduce these results, you need the following items:
Note: The code to run the benchmark is provided only as an example. Although the code creates a working setup, Oracle Cloud Infrastructure doesn’t support this code.
The config.py file shows you the relevant customizations. Although you can edit the file directly, we recommend that you create an override file that contains your specific changes and then activate the file by setting the OVERRIDES environment variable, before you run any scripts.
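The actual keys in config.py belong to the scripts themselves, so the snippet below is only a sketch of the OVERRIDES mechanism described above, with invented parameter names; it loads a Python file named by the OVERRIDES environment variable and overlays its top-level settings onto the defaults.

```python
import importlib.util
import os

# Stand-in for the defaults in config.py (these parameter names are hypothetical)
config = {"region": "us-phoenix-1", "shape": "VM.Standard2.1", "iterations": 10}

def apply_overrides(defaults: dict) -> dict:
    """Overlay settings from the Python file named by the OVERRIDES env var, if set."""
    path = os.environ.get("OVERRIDES")
    if not path:
        return defaults
    spec = importlib.util.spec_from_file_location("overrides", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    # Keep only public top-level names from the override file
    overrides = {k: v for k, v in vars(module).items() if not k.startswith("_")}
    return {**defaults, **overrides}
```

With this pattern, an override file containing only `shape = "VM.Standard2.8"` changes that one setting while the other defaults stay intact.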
At a minimum, modify the following parameters in the config.py file:
If you decide to launch instances with these scripts, you typically run the scripts in this order:
We’re excited to help you understand the performance that you can expect from Oracle Cloud implementations, and this post describes a transparent methodology for measuring the performance that our Compute instances deliver.
We want you to experience the features and enterprise-grade capabilities that Oracle Cloud Infrastructure offers. It’s easy to try them out with our 30-day trial and our Always Free tier of services. For more information, see the Oracle Cloud Infrastructure Getting Started guide, Compute service overview, and Compute FAQ.
Director Product Management, OCI Compute Service