Pat Shuff's Blog

  • Iaas
    August 18, 2016

Ravello cloud virtualization

Yesterday we talked about what it would take to go from a bare metal or virtualized solution in your data center to a cloud vendor. We found that it is not only difficult but requires real work to make it happen. There are tools to convert your VMDK image to an AMI with Amazon, a VHD with Microsoft, or tar.gz format with Oracle. That's the fundamental problem. There are tools to convert. You can't simply pull a backup of your bare metal install or your VMDK image, upload it, and run it. Java ran into this problem during its early years. You could not take a bundle of C or C++ code, take the binary, and run it on a Mac or Linux or Windows. You had to recompile your code and hope that the libc or libc++ library was compatible from operating system to operating system. A simple recompile should have solved the problem, but the majority of the time it required a conditional compile or a different library on a different operating system to make things work. The basic problem was that things like network connections or reading and writing from a disk were radically different. On Windows you use a backslash as the path separator while on Linux and MacOS you use a forward slash. File names and lengths are different and can or can't use different characters. Unfortunately, the same is true in the cloud world. A virtual network interface is not the same between all of the vendors. Network storage might be accessible through an iSCSI mount, an NFS mount, or only a REST API. The virtual compute definition changes from cloud vendor to cloud vendor, creating a need for a virtualization shim similar to the programming shim Java provided a few decades ago. Ravello stepped in and filled this gap for the four major cloud vendors.
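The path separator difference is easy to see from Python's standard library, which models each operating system's path flavor explicitly. This small sketch just illustrates the portability point above; it is not related to Ravello's tooling.

```python
# The same logical path renders differently per OS family:
# Linux/MacOS use forward slashes, Windows uses backslashes.
from pathlib import PurePosixPath, PureWindowsPath

posix = PurePosixPath("var", "log", "app.log")
windows = PureWindowsPath("var", "log", "app.log")

print(posix)    # var/log/app.log
print(windows)  # var\log\app.log
```

The same "write once, translate per platform" idea is what a cloud virtualization shim has to do for devices, networks, and storage rather than file paths.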

Ravello Systems stepped in a few years ago, took the VMDK disk image format defined by VMWare, and wrote three components to make a cloud vendor look like a VMWare system: nested virtualization, software defined networking, and virtual storage interfaces. The idea was to take not just a single system but the whole group of VMWare instances that make up a solution and import them into a cloud vendor unchanged. The user took a graphical user interface, mapped the network relationships between the instances, and deployed these virtual images into a cloud vendor. The basis of the solution was to deploy the Ravello HVX hypervisor emulator onto a compute instance at the cloud vendor for each instance, then deploy the VMWare VMDK on top of the HVX instance. Once this was done, the storage and network interfaces were mapped according to the graphical user interface connections and the VMDK image could run unchanged.

Running a virtual instance unchanged was a radical concept. So radical that Oracle purchased Ravello Systems early this spring and expanded the sales force of the organization. The three key challenges faced by Ravello were that 50% of the workloads that run in customer data centers do not port well to the cloud, that many of these applications utilize layer 2 IP protocols which are typically not available in most cloud environments, and that VMWare implementations on different hardware vendors generate virtual code and configurations different enough to make mapping to any cloud vendor difficult. The first solution was to virtualize the VMWare ESX and ESXi environment and layer it on top of multiple cloud vendor solutions. When an admin allocates a processor, does this mean a thread, as it does in AWS, or a core, as it does in Azure and Oracle? When a network is allocated and given a NAT configuration, can this be done on the cloud infrastructure or does it need to be emulated in the HVX?

The nested virtualization engine was designed to run VMWare saved code natively without change. Devices from the cloud vendor were exposed to the code as VMWare devices and virtual devices. The concept was to minimize the differences between cloud solutions and make the processor and hypervisor look as much like ESX and ESXi as possible. When hardware virtualization extensions are available, the easiest way to implement this illusion is "trap and emulate". Trap and emulate works as follows: the hypervisor configures the processor so that any instruction that can potentially "break the illusion" (e.g., accessing the memory of the hypervisor itself) generates a trap. The trap interrupts the guest and transfers control to the hypervisor, which examines the offending instruction, emulates it in a safe way, and then allows the guest to continue executing. HVX, the Ravello hypervisor, instead employs a technology called binary translation to implement high-performance virtualization. Unlike the trap-and-emulate method, binary translation works even when virtualization extensions are not available.
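The core idea of binary translation can be sketched in a few lines: before a block of guest code runs, the translator rewrites any instruction that could "break the illusion" into a call to a safe emulation routine. This is a toy illustration with hypothetical opcode names, not HVX's actual implementation.

```python
# Toy binary translation sketch: privileged instructions are rewritten
# at translation time to trap into the hypervisor's emulation routine;
# everything else runs directly, which is where the performance comes from.
PRIVILEGED = {"OUT", "HLT", "CR3_WRITE"}  # hypothetical privileged opcodes

def translate(block):
    """Rewrite a guest code block so privileged instructions are emulated."""
    translated = []
    for instr in block:
        opcode = instr.split()[0]
        if opcode in PRIVILEGED:
            translated.append(("EMULATE", instr))  # replaced before execution
        else:
            translated.append(("DIRECT", instr))   # safe: runs unmodified
    return translated

def run(block, state):
    for kind, instr in translate(block):
        if kind == "EMULATE":
            state.setdefault("emulated", []).append(instr)  # hypervisor handles it
        else:
            state.setdefault("executed", []).append(instr)  # guest runs it directly

state = {}
run(["MOV r1, 5", "OUT 0x20", "ADD r1, 1"], state)
print(state["executed"])  # ['MOV r1, 5', 'ADD r1, 1']
print(state["emulated"])  # ['OUT 0x20']
```

The contrast with trap-and-emulate is that the rewrite happens up front in software, so no hardware virtualization extensions are needed to catch the dangerous instructions.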

Pure L2 access is difficult in the cloud, and VLANs, span ports, and broadcast/multicast usually do not work. Ravello allows you to run existing multi-VM applications unmodified in the cloud, not just single virtual machines. To make this possible, Ravello provides a software-defined network that virtualizes the connectivity between the virtual machines in an application. The virtual network is completely user-defined and can include multiple subnets, routers, and supplemental services such as DHCP, DNS servers, and firewalls. The virtual network can be made to look exactly like a data center network. The data plane of the virtual network is formed by a fully distributed virtual switch and virtual router software component that resides within HVX. Network packets sent by a VM are intercepted and injected into the switch. The switch operates much like a regular network switch. For each virtual network device, the virtual switch creates a virtual port that handles incoming and outgoing packets from the connected virtual NIC device.
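The virtual switch behaves like a standard learning Ethernet switch: it learns which port each source MAC lives on, forwards known unicast frames to a single port, and floods broadcast or unknown destinations to all other ports. The sketch below is a minimal, hypothetical illustration of that data-plane behavior, not HVX's code.

```python
# Minimal learning-switch sketch: one virtual port per attached NIC,
# a learned MAC table, unicast forwarding, and flooding for broadcast
# or unknown destinations (standard Ethernet bridge behavior).
class VirtualSwitch:
    BROADCAST = "ff:ff:ff:ff:ff:ff"

    def __init__(self):
        self.mac_table = {}  # learned: MAC address -> port name
        self.ports = {}      # port name -> delivery callback for that NIC

    def attach(self, port, deliver):
        self.ports[port] = deliver

    def inject(self, in_port, src_mac, dst_mac, payload):
        self.mac_table[src_mac] = in_port        # learn the sender's port
        out = self.mac_table.get(dst_mac)
        if out is not None and dst_mac != self.BROADCAST:
            self.ports[out](payload)             # known unicast destination
        else:
            for port, deliver in self.ports.items():
                if port != in_port:              # flood everywhere but the source
                    deliver(payload)

# Usage with two attached "NICs" modeled as lists:
received_a, received_b = [], []
sw = VirtualSwitch()
sw.attach("portA", received_a.append)
sw.attach("portB", received_b.append)
sw.inject("portA", "aa:aa", VirtualSwitch.BROADCAST, "arp-request")  # flooded
sw.inject("portB", "bb:bb", "aa:aa", "arp-reply")                    # learned unicast
```

Because this switch is pure software inside HVX, it can offer L2 semantics (flooding, learning, VLANs) even on clouds whose native networks only provide L3 connectivity.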

Ravello's storage overlay solution focuses on performance, persistence, and security. It abstracts native cloud storage primitives such as object storage and various types of block devices into local block devices exposed directly to the guest VMs. Everything from the device type and controller type to the location on the PCI bus remains the same, so it appears to the guest as if it was running on its original data center infrastructure. This allows the guest VM to run exactly as is, with its storage configuration unchanged from on premises. Cloud storage abstraction (and presentation as a local block device), coupled with the HVX overlay networking capabilities, allows for running various NAS appliances and consuming them over network-based protocols such as iSCSI, NFS, CIFS, and SMB. These block devices are backed by a high performance copy-on-write filesystem, which allows Ravello to implement its multi-VM incremental snapshot feature.
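The copy-on-write idea behind the snapshot feature is simple to sketch: reads fall through to a read-only base image, writes land in an overlay, and an incremental snapshot only needs the overlay. This toy model is an illustration of the general technique, not Ravello's on-disk format.

```python
# Toy copy-on-write block device: the base image is never modified,
# so a snapshot only has to capture the blocks written since it was taken.
class CowDevice:
    BLOCK_SIZE = 512

    def __init__(self, base_blocks):
        self.base = base_blocks  # read-only base image: block index -> bytes
        self.overlay = {}        # blocks written since the last snapshot

    def read(self, idx):
        # Overlay wins; otherwise fall through to the base (zeroes if absent).
        return self.overlay.get(idx, self.base.get(idx, b"\x00" * self.BLOCK_SIZE))

    def write(self, idx, data):
        self.overlay[idx] = data  # copy-on-write: base stays untouched

    def snapshot(self):
        # Incremental snapshot: only changed blocks need to be persisted.
        return dict(self.overlay)

# Usage:
base = {0: b"base-block"}
dev = CowDevice(base)
dev.write(0, b"new-data")
```

Chaining overlays like this is how incremental, multi-VM snapshots stay cheap: each snapshot stores deltas rather than a full copy of every disk.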

We could walk through a hands-on lab developed by the Ravello team to show how to import an on-site Primavera deployment into the Oracle Compute Cloud. The block diagram looks like the picture shown below. We import all of the VMDK files and connect the instances together using the GUI based application configuration tool.



Once we have the instances imported, we can configure the network interfaces by adding a virtual switch, virtual gateway, and virtual NIC, assigning public IP addresses, and adding a VLAN to the configuration.




Ravello allows us to use features that are not supported by cloud vendors. For example, Amazon and Microsoft don't allow layer 2 routing or multicast broadcasting; VMWare allows both. The HVX layer traps these calls and emulates the features, for example by doing ping over TCP, or by emulating a multicast broadcast by opening connections to all hosts on the network and sending a packet to each one. In summary, Ravello allows you to take your existing VMWare virtualization workload and deploy it to virtually any cloud compute engine. The HVX hypervisor provides the shim and even extends some of the features and functions that VMWare provides to cloud vendors. Functions like layer 2 routing, VLAN tagging, and multicast/broadcast packets are supported through the HVX layer between instances.
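The broadcast emulation described above reduces to fanning one logical broadcast out as a unicast copy to every known peer. This is a hypothetical helper illustrating the idea, not HVX's API.

```python
# Emulating a layer-2 broadcast over point-to-point (unicast) links:
# instead of one broadcast frame, send a unicast copy to each known host.
def emulated_broadcast(peers, send_unicast, payload):
    """peers: iterable of peer addresses; send_unicast: fn(addr, payload)."""
    for addr in peers:
        send_unicast(addr, payload)  # one TCP/unicast copy per host

# Usage with a stub transport that records deliveries:
delivered = []
emulated_broadcast(
    ["10.0.0.2", "10.0.0.3"],
    lambda addr, payload: delivered.append((addr, payload)),
    b"who-has 10.0.0.5?",
)
```

The trade-off is bandwidth: N peers means N copies of every broadcast frame, which is why the overlay only does this where the underlying cloud network cannot.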
