Automating virtual IP failover is a critical step in keeping applications running and users connected when your cloud servers experience a problem. This article explains how to automate the Virtual IP failover process on Oracle Cloud Infrastructure using Linux Corosync/Pacemaker together with the Oracle Cloud Infrastructure command line interface (CLI). We’ll explore how to set up a secondary IP that fails over automatically in case of downtime. As prerequisites, you should understand how the Linux high availability services work and be familiar with the Oracle Cloud Infrastructure provisioning process.
Getting Started
First, let’s explore the main components that will be used and that you need to be familiar with. These components allow your Virtual IP to fail over automatically on Oracle Cloud Infrastructure:
– Corosync and Pacemaker are open source services, suitable for both small and large clusters, that provide high availability for applications.
– The Oracle Cloud Infrastructure CLI integrates the Linux Corosync/Pacemaker IPaddr2 Virtual IP resource with the Oracle Cloud Infrastructure VNIC secondary IP.
– Oracle Linux 7.4 instances will host this environment. Other Linux distributions can be used as well, as long as they support Corosync/Pacemaker.

Preparing Instances for VirtualIP Failover
Once your Oracle Linux instances have been provisioned, you will need to set up the CLI as explained in the public documentation, and install and configure your Corosync/Pacemaker cluster along with its requirements (STONITH, quorum, resources, constraints, and so on). After configuring your Corosync/Pacemaker cluster and the CLI, you will need to set up your Virtual IP resource. Below is a quick example of how to create a Virtual IP resource on Corosync/Pacemaker from the command line. The same step can also be performed through the web UI.
sudo pcs resource create Cluster_VIP ocf:heartbeat:IPaddr2 ip=172.0.0.10 cidr_netmask=24 op monitor interval=20s
NOTE:
The ‘cidr_netmask=24’ value in the Pacemaker command assumes the subnet size is /24; adjust it to match your own subnet.
The next step is selecting one of the Oracle Linux Corosync/Pacemaker nodes and assigning it a new secondary IP address (172.0.0.10 will be used, based on VCN 172.0.0.0/16). This can be done using the Oracle Cloud Infrastructure console as explained in the public documentation. The secondary IP will be used as the Corosync/Pacemaker floating IP.
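If you prefer the CLI over the console, the same assignment can be made with the `oci network vnic assign-private-ip` command (the same command the integration below relies on). The OCID in this sketch is a placeholder, and the command is only printed for review rather than executed:

```shell
# Placeholder OCID -- replace with your node's real VNIC OCID.
VNIC_OCID="ocid1.vnic.oc1.phx.NODE1-vNIC-OCID"
VIP="172.0.0.10"

# Print the assignment command for review; remove the `echo` to actually run it.
echo "oci network vnic assign-private-ip --unassign-if-already-assigned --vnic-id ${VNIC_OCID} --ip-address ${VIP}"
```

The `--unassign-if-already-assigned` flag moves the secondary IP even if it is currently attached to another VNIC, which is exactly the behavior a failover needs.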
Integrating Linux Corosync/Pacemaker with Oracle Cloud Infrastructure CLI
After you have your Oracle Linux Corosync/Pacemaker instances up and running along with the CLI, you will need to identify the Oracle Cloud IDs (OCIDs) of the Virtual Network Interface Cards (VNICs) that will be used to automate the failover process. More details about how to do that can be found here. Write down your cluster nodes’ VNIC OCIDs, update the commands below with your own OCIDs, and run them on ALL NODES to update the Corosync/Pacemaker IPaddr2 resource.
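As a sketch of how to pull the OCID out of CLI output: `oci compute instance list-vnics --instance-id <instance-OCID>` returns JSON containing an `id` field per VNIC. The snippet below parses a hypothetical, trimmed-down sample of that JSON rather than calling the live API:

```shell
# Hypothetical, trimmed sample of the JSON shape returned by
# `oci compute instance list-vnics`; real output has many more fields.
cat > /tmp/vnics.json <<'EOF'
{"data": [{"display-name": "node1", "id": "ocid1.vnic.oc1.phx.NODE1-vNIC-OCID"}]}
EOF

# Extract the VNIC OCID (python3 is used for JSON parsing; jq would also work).
OCID=$(python3 -c 'import json; print(json.load(open("/tmp/vnics.json"))["data"][0]["id"])')
echo "$OCID"
```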
The following Linux “sed” commands update the Linux HA IPaddr2 resource so that, in addition to unloading and reloading the floating IP address on a different Linux node, it also updates the Oracle Cloud Infrastructure VNIC information, keeping the cloud environment in sync with your instance’s current network IP setup.
### Back up your IPaddr2 HA resource file first before editing it
sudo cp /usr/lib/ocf/resource.d/heartbeat/IPaddr2 /usr/lib/ocf/resource.d/heartbeat/IPaddr2.bck
### Make sure you replace NODEx-vNIC-OCID with your own OCIDs
sudo sed -i '627i\##### OCI vNIC variables\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sudo sed -i '628i\server="`hostname -s`"\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sudo sed -i '629i\node1vnic="ocid1.vnic.oc1.phx.NODE1-vNIC-OCID"\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sudo sed -i '630i\node2vnic="ocid1.vnic.oc1.phx.NODE2-vNIC-OCID"\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sudo sed -i '631i\vnicip="172.0.0.10"\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sudo sed -i '632i\export LC_ALL=C.UTF-8\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sudo sed -i '633i\export LANG=C.UTF-8\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sudo sed -i '634i\touch /tmp/error.log\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sudo sed -i '635i\##### OCI/IPaddr Integration\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sudo sed -i '636i\    if [ $server = "node1" ]; then\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sudo sed -i '637i\        /root/bin/oci network vnic assign-private-ip --unassign-if-already-assigned --vnic-id $node1vnic --ip-address $vnicip >/tmp/error.log 2>&1\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sudo sed -i '638i\    else\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sudo sed -i '639i\        /root/bin/oci network vnic assign-private-ip --unassign-if-already-assigned --vnic-id $node2vnic --ip-address $vnicip >/tmp/error.log 2>&1\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
sudo sed -i '640i\    fi\' /usr/lib/ocf/resource.d/heartbeat/IPaddr2
NOTE:
– Replace ocid1.vnic.oc1.phx.NODE1-vNIC-OCID and ocid1.vnic.oc1.phx.NODE2-vNIC-OCID with your own OCI VNIC OCIDs.
– Adjust the /root/bin/ path if you have installed the Oracle Cloud Infrastructure CLI in a different location.
– Replace the “node1” and “node2” hostname entries with your own cluster node hostnames.
– The above example is for two Oracle Linux 7.4 Corosync/Pacemaker cloud nodes; it is easy to adjust it if your cluster has more than two nodes.
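The `sed -i 'Ni\text'` form used above inserts a new line before line N of the file. Here is a minimal sketch of that mechanism on a scratch file, so nothing under /usr/lib is touched:

```shell
# Create a three-line scratch file.
printf 'line1\nline2\nline3\n' > /tmp/demo.txt

# Insert a marker line before line 2, mirroring the '627i\...' edits above.
sed -i '2i\##### inserted block' /tmp/demo.txt

cat /tmp/demo.txt
```

Note that the line numbers 627–640 correspond to a specific IPaddr2 version shipped with Oracle Linux 7.4; verify the insertion point in your copy of the resource agent before running the commands.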
Testing the VirtualIP Failover
Your configuration is done, and now it’s time to test the Virtual IP failover. You can do this by simulating a crash, disabling the node where the Virtual IP is running, or simply moving the Corosync/Pacemaker Virtual IP resource from one node to another via the command line.
The example below assumes your Virtual IP (Cluster_VIP) resource is running on node1; to move it to node2, run the following command:
sudo pcs resource move Cluster_VIP node2
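After the move, you would typically confirm the new placement on the cluster and check that the floating IP actually landed on node2. Since those checks need a live cluster, this sketch only prints the verification commands (node names are the hypothetical node1/node2 from above):

```shell
# These checks need a live cluster, so they are only printed here for review.
cmd1="sudo pcs status resources"        # Cluster_VIP should report Started on node2
cmd2="ip addr show | grep 172.0.0.10"   # run on node2: the VIP should appear on its interface
echo "$cmd1"
echo "$cmd2"
```

If the OCI side of the move failed, /tmp/error.log on the target node (created by the IPaddr2 edits above) is the first place to look.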
Demonstration
Watch this Automatic VirtualIP Failover on Oracle Cloud Infrastructure video (4 minutes) to see how the failover happens automatically in the event of downtime, without impacting your end user access.
