Oracle RAC DB on Ravello

Oracle RAC Overview

Oracle Real Application Clusters (Oracle RAC) is a shared-cache clustered database architecture that uses Oracle Grid Infrastructure to share server and storage resources across nodes. If a node fails, the surviving instances take over its workload automatically and almost instantaneously, which gives the cluster an extremely high degree of scalability, availability, and performance.

Originally focused on providing improved database services, Oracle RAC has evolved over the years and is now built on a comprehensive high availability (HA) stack that can serve as the foundation of a database cloud, as well as a shared infrastructure that ensures high availability, scalability, flexibility, and agility for any application in your data center.

Fig. 1: Oracle Database with Oracle RAC architecture

Need for Ravello

Enterprises run Oracle RAC as part of larger applications in a variety of scenarios. They also need such environments for development, testing, staging, and user acceptance testing. Maintaining on-premises environments for such transient needs is expensive. Ravello provides a great platform for these use cases by offering data-center-like environments on public cloud, complete with VMware support and Layer 2 networking.

An Oracle RAC database is a shared-everything database. All data files, control files, SPFILEs, and redo log files in Oracle RAC environments must reside on cluster-aware shared disks so that all of the cluster database instances can access these storage components. All database instances must use the same interconnect, which can also be used by Oracle Clusterware. Public cloud environments do not natively provide the shared storage and Layer 2 capabilities required by Oracle Clusterware for RAC. However, such functionality can be achieved using Ravello.

Oracle RAC DB on Ravello

The implementation below broadly follows the reference article on setting up an Oracle RAC DB installation in a VMware ESXi environment [1]. The deployment diagram for the implementation:

Fig. 2: Deployment diagram


For the deployment, the following subnets are configured:

  • public network
  • cluster interconnect/private
  • shared storage access

The ‘rnas’ node runs Openfiler 2.99.1 and is set up as an iSCSI target for the RAC nodes to connect to. Two logical volumes are exported as iSCSI targets:

  • ocr – for OCR and voting disk
  • data – for database (datafiles, control files, redo log files, spfile)

Database access is configured through iSCSI with Automatic Storage Management (ASM). The Grid Infrastructure and Database binaries are stored locally on each RAC node.
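As a sketch of how each RAC node attaches to the Openfiler targets over the storage subnet, the iSCSI discovery and login steps look roughly as follows. The storage-node IP and the target IQNs below are placeholders, not the actual values from this deployment:

```shell
# On each RAC node: discover the iSCSI targets exported by the Openfiler node.
# STORAGE_IP and the IQNs are placeholders -- substitute your own values.
STORAGE_IP=<storage-node-ip>
iscsiadm -m discovery -t sendtargets -p "$STORAGE_IP"

# Log in to the 'ocr' and 'data' targets so the LUNs appear as local block devices.
iscsiadm -m node -T iqn.2006-01.com.openfiler:ocr  -p "$STORAGE_IP" --login
iscsiadm -m node -T iqn.2006-01.com.openfiler:data -p "$STORAGE_IP" --login

# Make the logins persistent across reboots.
iscsiadm -m node -T iqn.2006-01.com.openfiler:ocr  -p "$STORAGE_IP" \
         --op update -n node.startup -v automatic
```

The same login and startup-policy steps would be repeated for the `data` target on every RAC node.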

Setting up imported VMs into Ravello

  1. As a first step, we import the 4 VMs that were set up in the on-premises VMware environment into Ravello’s VM Library, then create a new application by dragging the VMs onto the canvas: two RAC nodes (‘rnode1’ and ‘rnode2’) and one storage node (‘rnas’). We also add a test node, ‘rtest’, to test the overall functionality of the deployment.

    Fig. 3: Building the application with the imported VMs


    All VMs use the Oracle Linux 7.3 distribution. The RAC DB nodes are configured with 4 vCPUs and 16 GB of memory each, the storage node running Openfiler also has 4 vCPUs and 16 GB of memory, and the test node running OL7.3 has 2 vCPUs and 8 GB of memory.

    On the network tab, Ravello automatically re-creates the underlying network by looking at the ESXi configuration files and VM disk images.

    Fig. 4: Network view of the application


    We will now make sure all the settings in each VM match our expectations. Let us take a look at ‘rnode1’ in the Ravello UI.

  2. Let us start with the ‘General’ tab. Make sure that the hostname field is populated and matches the hostname configured inside the VM.

    Fig. 5: General tab for node1


  3. Under the ‘Disks’ tab, we select a para-virtualized controller for better performance.

    Fig. 6: Disks tab for node1


  4. Under the ‘NICs’ section, we select para-virtualized devices for each of the NICs for better performance. RAC requires a ‘public’ interface and a ‘cluster interconnect’ interface per node. As pointed out earlier, we use a separate subnet to handle shared storage traffic. We verify that all the NICs are present and configured with the right IP settings on each of the nodes.

    Fig. 7: Public interface for rnode1


    Fig. 8: Private interface for rnode1


    Fig. 9: Storage interface for rnode1


    We expose ‘ssh’ and ‘Enterprise Manager Express’ by enabling port 22 and port 5501, respectively, on the ‘Services’ tab.
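    Once published, the exposed services can be checked from any outside machine. A quick sketch, where the public DNS name that Ravello assigns to the VM is a placeholder:

```shell
# HOST is a placeholder for the public DNS name Ravello assigns to the VM.
HOST=<ravello-public-dns>

# SSH on port 22.
ssh oracle@"$HOST"

# EM Express serves HTTPS on port 5501; -k skips certificate verification,
# since EM Express uses a self-signed certificate by default.
curl -k "https://$HOST:5501/em/"
```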

    Fig. 10: External services


    Next, we ‘Edit and Verify’ all the VMs in the application in a similar fashion. Once this is done, the application is ready to be published.
  5. Publish the application to bring up the VMs in the public cloud, using either the ‘Cost-optimized’ or the ‘Performance-optimized’ option.

Verifying the RAC setup on Ravello

  1. Log in to any node and check whether the shared storage is mounted on the RAC nodes over iSCSI.

    Fig. 11: Shared storage details
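    The kind of check shown in Fig. 11 can be done from the shell, for example as below. Device names will differ per environment, and the `oracleasm` step assumes ASMLib is used for disk labeling rather than udev rules:

```shell
# List active iSCSI sessions -- both the 'ocr' and 'data' targets should appear.
iscsiadm -m session

# Show block devices; the iSCSI LUNs appear as additional disks (e.g. sdb, sdc).
lsblk

# If ASMLib is in use, list the disks ASM can see (run as root or the grid user).
oracleasm listdisks
```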


  2. Confirm that Grid Infrastructure is up and running.

    Fig. 12: Grid Infrastructure status
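    From the Grid Infrastructure home, the status shown in Fig. 12 is typically obtained with `crsctl`, roughly as follows:

```shell
# Check the clusterware stack on all nodes at once.
crsctl check cluster -all

# Tabular view of every clusterware-managed resource
# (ASM, listeners, VIPs, the database, and so on).
crsctl stat res -t
```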


  3. Check the RAC DB configuration by running the ‘srvctl’ command.

    Fig. 13: RAC configuration for DB check
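    The `srvctl` check in this step might look like the following; the database name ‘orcl’ is an assumption, so substitute the actual db_unique_name of the deployment:

```shell
# Show the RAC database configuration: instances, nodes, ORACLE_HOME,
# spfile location, and services. 'orcl' is a placeholder name.
srvctl config database -d orcl
```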


  4. Check the status of the DB running on RAC.

    Fig. 14: RAC DB status check
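    The status check in Fig. 14 corresponds to a command along these lines, again with ‘orcl’ as a placeholder database name:

```shell
# Report which instances of the database are running and on which nodes.
srvctl status database -d orcl
```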


We now have a fully functional Oracle RAC environment running Oracle Database 12cR1 on the cloud using Ravello. One can test the deployment by connecting to the database from the test node ‘rtest’.
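Connecting from the test node can be sketched as follows; the SCAN name, service name, and credentials below are placeholders for whatever this deployment actually uses:

```shell
# From the test node, connect through the cluster's SCAN listener.
# 'rac-scan' and 'orcl' are placeholder names for this deployment.
sqlplus system@//rac-scan:1521/orcl

# Inside SQL*Plus, confirm that both instances are open:
#   SQL> SELECT inst_id, instance_name, status FROM gv$instance;
```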


Free Trial

To try out your custom RAC environment on public cloud, please open a free Ravello trial account.


References

  1. Build Your Own Oracle RAC 11g Cluster on Oracle Linux and iSCSI
  2. Oracle Real Application Clusters (RAC)
  3. Real Application Clusters Administration and Deployment Guide


