If you want to try out a High Availability Cluster on OpenSolaris, but don't have the physical hardware, you can easily prototype it in VirtualBox. You need only a single physical machine with an AMD or Intel processor and at least 3 GB of RAM. Even a laptop works fine; I'm using a Toshiba Tecra M10.
When using VirtualBox, the "cluster" will be two VirtualBox guests. Because of a new preview feature in Open HA Cluster 2009.06, called "weak membership," you don't need to worry about a quorum device or quorum server. More on that below.
For detailed instructions on running Open HA Cluster 2009.06 in VirtualBox, I highly recommend Thorsten Frueauf's whitepaper (pdf link). This post won't attempt to be a substitute for that document. Instead, I will describe a single, simple configuration to get you up and running. If this piques your interest, please read Thorsten's whitepaper for more details.
Without further ado, here are the instructions for running a two-node Open HA Cluster in VirtualBox.
1. Install OpenSolaris 2009.06
2. Install VirtualBox
3. Create The First OpenSolaris VirtualBox Guest
4. Create a Second VirtualBox Guest
- Repeat the above procedure for the second guest (naming it something different, obviously).
4.5. (Optional) Disable Graphical Boot and Login in the Guests
Disabling the graphical login helps reduce memory consumption by the guests. Perform the following procedure in each of the two guests:
- Edit the entry for your boot environment (BE) in the GRUB menu to remove the "splashimage", "foreground", and "background" lines, and remove ",console=graphics" from the kernel line:
# cp /rpool/boot/grub/menu.lst menu.lst.bak1
# vi /rpool/boot/grub/menu.lst
The diffs should be something like this (though your line numbers may vary depending on what BEs you have in the GRUB menu):
# diff /rpool/boot/grub/menu.lst menu.lst.bak1
> splashimage /boot/solaris.xpm
> foreground d25f00
> background 115d93
< kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
---
> kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS,console=graphics
Your BE entry should look something like this:
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
- Disable the graphical-login/gdm service and reboot.
# svcadm disable graphical-login/gdm
# init 6
5. Configure Cluster Publisher
To form the cluster, you'll later configure "bridged networking" for the VirtualBox guests. Once you do that, the guests won't be able to access the Internet without some additional steps that I won't document here (see Thorsten's whitepaper for details).
Thus, you need to configure the cluster publisher now and install all the packages you'll need from the repositories before you reconfigure the networking.
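As a sketch, assuming the Open HA Cluster packages are served from the ha-cluster repository at pkg.opensolaris.org (verify the publisher name and URI against Thorsten's whitepaper before running this), you'd add the publisher in each guest like so:
# pkg set-publisher -O http://pkg.opensolaris.org/ha-cluster/ ha-cluster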
6. Install Packages
# pkg install ha-cluster-full
I also recommend installing the COMSTAR packages now, which you'll need if you want to use iSCSI to create highly available shared storage using the directly attached disks on each node of the cluster. I'll describe this process in detail in a subsequent post.
# pkg install SUNWstmf SUNWiscsi SUNWiscsit
- You can also install applications that you'll need at this point. For example, to install Tomcat and MySQL:
# pkg install SUNWtcat
# pkg install SUNWmysql51
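It's worth verifying in each guest that everything installed before you reconfigure the networking; adjust this list to match the packages you actually chose:
# pkg list ha-cluster-full SUNWstmf SUNWiscsi SUNWiscsit SUNWtcat SUNWmysql51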
7. Configure Networking on the Physical Host
You can now set up the networking framework to allow the two cluster nodes (the VirtualBox guests) to communicate both with each other and with the physical host.
- First, on the physical host, create an “etherstub”. This is a fake network adapter that will let the guests and host communicate as if they were on their own subnet. The benefit of using an etherstub instead of a physical adapter is that the network communication works whether or not the host is connected to an outside network.
# dladm create-etherstub etherstub0
- Next, create five VNICs on the etherstub: one for the host, and two for each guest (one for the public network and one for the cluster private network). You can use different MAC addresses if you prefer (a quick way to list them again appears at the end of this step).
# dladm create-vnic -l etherstub0 -m a:b:c:d:1:2 vnic0
# dladm create-vnic -l etherstub0 -m a:b:c:d:1:3 vnic1
# dladm create-vnic -l etherstub0 -m a:b:c:d:1:4 vnic2
# dladm create-vnic -l etherstub0 -m a:b:c:d:1:5 vnic3
# dladm create-vnic -l etherstub0 -m a:b:c:d:1:6 vnic4
- Still on the physical host, disable NWAM and enable network/physical:default.
# svcadm disable nwam
# svcadm enable network/physical:default
- Now you can start assigning IP addresses. This demo uses the 10.0.2.0/24 subnet for the internal communication, and leaves the 192.168.1.0/24 subnet for the external network. Pick three IP addresses: one for the physical host, and one for each of the cluster nodes. I'm using 10.0.2.97 for the physical host, 10.0.2.98 for the first node, and a third address (10.0.2.99, say) for the second node.
These are random choices. Feel free to use any IP addresses within the proper subnet.
- Configure the host's IP address on one of the VNICs. I use vnic0.
# ifconfig vnic0 plumb
# ifconfig vnic0 inet 10.0.2.97/24 up
- Make the configuration persistent across reboots:
# echo "10.0.2.97/24" > /etc/hostname.vnic0
- Add entries to /etc/inet/hosts for the two guests and the host:
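For example, assuming the physical host is named myhost and the second guest is named node2 (placeholder names; the first guest in this demo is chopin), the entries would look something like this:
10.0.2.97   myhost   # physical host (placeholder name)
10.0.2.98   chopin   # first cluster node
10.0.2.99   node2    # second cluster node (placeholder name and address)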
- Plumb and configure the physical adapter to access an external network with DHCP. This assumes the public adapter is named e1000g0; run dladm show-link to find the name of the adapter on your system.
# ifconfig e1000g0 plumb
# ifconfig e1000g0 dhcp start
- Make it persistent:
# touch /etc/hostname.e1000g0 /etc/dhcp.e1000g0
- Add dns to the hosts line in /etc/nsswitch.conf so name resolution works:
# grep dns /etc/nsswitch.conf
hosts: files dns
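Before moving on to the guests, it's worth a quick sanity check on the host side. dladm show-vnic lists the VNICs along with the MAC addresses you'll need when configuring the guest adapters, and ifconfig confirms the IP configuration:
# dladm show-vnic
# ifconfig vnic0
# ifconfig e1000g0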
8. Configure Networking in the First Guest
As described earlier, you need to use "bridged networking" for the guests, which gives the guests emulated physical adapters that run on VNICs on the host. You need to give each guest two adapters – one for the public network and one for the cluster private network. Note that you can't use VNICs inside the guests because they don't work inside VirtualBox.
- While the guest is shut down, select it, and select the "Settings" button to edit it.
- Select the "Network" settings.
- Select "Adapter 1" and change "Attached to" to "Bridged Networking". Click the little screwdriver icon to the right of the selection box, and select the VNIC to use (I suggest vnic1) and fill in its MAC address. You can use dladm show-vnic on the host in case you forgot the MAC you chose when creating the VNIC.
- Do the same for "Adapter 2", using one of the other free vnics (vnic3 for example). For Adapter 2, you'll first need to check the "Enable Network Adapter" box, as only one adapter is enabled by default.
- Now boot the guest.
- Once booted, disable NWAM and enable the default physical networking service:
# svcadm disable network/physical:nwam
# svcadm enable network/physical:default
- Configure the IP address you chose earlier for this cluster node on the e1000g0 adapter.
# ifconfig e1000g0 plumb
# ifconfig e1000g0 inet 10.0.2.98/24 up
# echo "10.0.2.98/24" > /etc/hostname.e1000g0
- Add entries to /etc/inet/hosts:
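These can be the same three entries used on the physical host (again, all names other than chopin are placeholders):
10.0.2.97   myhost   # physical host (placeholder name)
10.0.2.98   chopin   # first cluster node
10.0.2.99   node2    # second cluster node (placeholder name and address)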
- Verify that you can connect to the physical host:
chopin# ping 10.0.2.97
- On the host, verify you can connect to the guest:
host# ping 10.0.2.98
9. Configure Networking in the Second Guest
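This mirrors the previous step. As a sketch, assuming the second guest's adapters are bridged onto the remaining VNICs (vnic2 and vnic4) and that it uses the 10.0.2.99 address (adjust both to your own choices), boot the guest and run:
# svcadm disable network/physical:nwam
# svcadm enable network/physical:default
# ifconfig e1000g0 plumb
# ifconfig e1000g0 inet 10.0.2.99/24 up
# echo "10.0.2.99/24" > /etc/hostname.e1000g0
Then add the same /etc/inet/hosts entries as before and verify with ping that the second guest can reach both the physical host and the first guest.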
10. Configure the Cluster
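Thorsten's whitepaper covers this step in detail. As a rough sketch, assuming the standard interactive scinstall utility delivered by the ha-cluster packages, you run it on one of the nodes and follow the prompts to create the two-node cluster:
# /usr/cluster/bin/scinstall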
11. Configure Weak Membership
If you're familiar with HA clusters, you may notice that you haven't configured a quorum device or quorum server to break a tie and ensure that only one node of the cluster stays up in the case of a network partition. Instead, you can use "weak membership," a new preview feature in Open HA Cluster 2009.06. Weak membership allows a two-node cluster to run without a quorum device or quorum server as arbitrator. Instead, you use a "ping target" arbitrator, which can be any network device on the same subnet. In the case of node death or a network partition, each node attempts to ping the ping target; if it succeeds, it stays up.
As you might guess, this mechanism is imperfect, and in the worst case it can lead to a split-brain scenario in which both nodes provide services simultaneously, which can lead to data loss. To configure weak membership in the VirtualBox setup, you can use the physical host as the ping target.
Now your cluster is ready to use!