
  • September 29, 2015

VSAN 6.1 environment on AWS and Google Cloud

Install and run VSAN 6.1 environment for sales demo, POC and training labs on AWS and Google Cloud

With the new release of VSAN 6.1, quite a few people are likely interested in installing this new version to test out the new features and to showcase their storage management products working with it. With Ravello, you can do this without a prohibitively expensive physical test setup (3 hosts, with SSD and storage). You can set up and run a multi-node ESXi environment on AWS and Google Cloud, configure VSAN 6.1, and save the setup as a blueprint in Ravello. If you are an ISV, you can then run your appliances directly on Ravello or on top of ESXi in this setup and build a demo environment in the public cloud.

You can provide access to this blueprint to your sales engineers, who can then provision a demo lab on demand in minutes. You can also set up VSAN 6.1 virtual training labs for students on AWS and Google Cloud, without the need for physical hardware.

Setup Instructions

To set up this lab, we start with the following:

  • 1 vCenter 6.0U1 Windows server
  • 2 clusters of 3 ESXi hosts each

If you want, you could start with a single cluster of 3 hosts, but this setup also allows us to test integration with products like vSphere Replication and Site Recovery Manager in the future, while being able to expand to 4 hosts per cluster very quickly to test new VSAN features such as failure domains or stretched clusters.

Refer to the following blog on how to set up ESXi hosts on AWS and Google Cloud with Ravello.

Each host has the following specs:

  • 2 vCPUs
  • 8 GB memory
  • 2 NICs (1 management, 1 VSAN)
  • 4 additional disks on top of the OS disk, 100 GB each. One of these disks will be used as the flash drive; the rest will serve as capacity disks

After publishing our labs and installing the software (or provisioning virtual machines from blueprints; I have blueprints for a preinstalled ESXi and vCenter, which saves quite some time), we can get started on the VSAN configuration.

Starting with VSAN 6.1, the only thing we actually need to do is open the vSphere Web Client, open the VSAN configuration for the cluster, and mark the first disk of each host as SSD. This is necessary because the underlying Ravello platform reports the disks to ESXi as spindle storage, and VSAN requires at least one flash disk per host to work.

If you want to test the all-flash features of VSAN, you previously had to use either the ESXi shell/SSH or community tools to configure SSD disks as capacity disks. With VSAN 6.1, this is all supported from the web client if you have the correct VSAN license. Still, the community tool can be useful if you have a large number of hosts or clusters and don't want to mark each disk as SSD manually. While you could script this yourself through PowerShell or SSH, the tool of choice is the VSAN All-Flash configuration utility by Rawlinson Rivera, published on his blog Punching Clouds.
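As a rough sketch of what such SSH scripting could look like, the following loop marks the same data disk as SSD on each host by adding an esxcli SATP claim rule with the enable_ssd option. The host names and device identifier are placeholders for illustration; look up your actual device IDs on each host with `esxcli storage core device list`.

```shell
#!/bin/sh
# Hypothetical host names and device ID -- substitute your own environment's values.
HOSTS="esxi-01 esxi-02 esxi-03"
DEVICE="mpx.vmhba1:C0:T1:L0"

for host in $HOSTS; do
  # Add a claim rule tagging the device as SSD, then reclaim it so the rule takes effect
  ssh root@"$host" "esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL --device=$DEVICE --option=enable_ssd && esxcli storage core claiming reclaim -d $DEVICE"
done
```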


Start by installing vSphere as normal. For vCenter, I've chosen to use the Windows version since it is the easier one to install, but installing the VCSA (either nested or by importing an existing VCSA as an OVF in Ravello) works equally well. Once deployed, there is no difference between the two for this setup.

As you can see, I’ve created the following setup:

By default, VSAN disk claiming is set to automatic. If you want to ensure that new disks are not added as capacity automatically, set this to manual when enabling VSAN. If you do choose automatic claiming, make sure your disks are marked as flash and configured correctly before enabling VSAN on your cluster; in that case, follow the rest of this blog before enabling VSAN at the cluster level.

First we have to configure our second interface with a static IP address and mark it as usable for VSAN traffic. For each ESXi host, go to the Manage tab and open Networking -> VMkernel adapters. Select the "Add Host Networking" option, choose "VMkernel Network Adapter", create a new virtual switch, and add the second NIC (vmnic1) to the standard switch.

After this, select "Virtual SAN Traffic" under the "Available services" header and configure an IP address.
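If you prefer the ESXi shell over the web client, the same networking steps can be sketched with esxcli. The switch name, port group name, and IP address below are illustrative assumptions; run this on each host with its own address:

```shell
# Create a standard switch, attach the second NIC, and add a port group for VSAN traffic
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=VSAN

# Create a VMkernel adapter on that port group with a static IP
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=VSAN
esxcli network ip interface ipv4 set --interface-name=vmk1 --type=static \
  --ipv4=192.168.100.11 --netmask=255.255.255.0

# Tag the interface for Virtual SAN traffic
esxcli vsan network ipv4 add --interface-name=vmk1
```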

Before we can start using VSAN, you’ll have to mark one (or all) of the disks as flash. If you want to use the standard VSAN configuration, mark the first disk on each ESXi host as flash by going to the host configuration, then Storage -> Storage Devices. Select the disk and click the “Mark disk as flash” button (the green square button with the F). Repeat this process for each host that you want to use in your VSAN cluster.

After marking a disk as flash on each host, you can enable VSAN. If you’ve left the VSAN settings at their defaults, the disks will automatically be consumed to create a VSAN datastore. If you’ve set disk claiming to manual, you’ll need to assign the disks to the VSAN storage pool yourself. This can be done by going into the cluster’s VSAN configuration, selecting Disk Management, and clicking the “Create a disk group” button for each host.
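The manual disk group creation can also be done from the ESXi shell on each host. A sketch with placeholder device IDs (one flash device and three capacity devices, matching the disk layout above):

```shell
# Device IDs are hypothetical -- list yours with: esxcli storage core device list
# -s is the flash (cache) device, each -d is a capacity device
esxcli vsan storage add \
  -s mpx.vmhba1:C0:T1:L0 \
  -d mpx.vmhba1:C0:T2:L0 \
  -d mpx.vmhba1:C0:T3:L0 \
  -d mpx.vmhba1:C0:T4:L0
```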

Afterwards, you should see a healthy green status, with 4 disks assigned to a single disk group on each host.
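You can confirm the same state from the ESXi shell: each host should report itself as a cluster member and list the disks it has claimed.

```shell
esxcli vsan cluster get    # shows the cluster UUID, node state, and member count
esxcli vsan storage list   # shows the disks claimed by VSAN and their disk group
```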

Saving your environment as a Blueprint

Once you have installed your VSAN environment, save it as a blueprint. You can then share it with team members in your sales engineering organization, training group, customers/prospects, and partners. With a few clicks, they can provision a fully functional instance of this environment on AWS or Google Cloud for their own use. You don’t need to schedule time on your sales demo infrastructure in advance; you can customize your demo scenario using a base blueprint, provision as many student training labs as needed on demand, and pay per use.
