At Ravello Systems, our mission is to make the cloud look more like the datacenter. We are very passionate about this mission, and we went to great lengths to develop technology that realizes it. After three years of hard work, the result is a sophisticated technology stack that we sometimes affectionately call virtualization 2.0. This stack includes an entirely new breed of hypervisor, a software-defined network and a storage virtualization solution.
This blog post is the first part of a two-part series about networking in the cloud. Networking is a very important part of making the cloud look more like the datacenter, precisely because this is the area where cloud and datacenter usually differ the most. In this first part I will talk about a low-level networking layer called "layer 2".
When networking people talk about "Layer 2", or L2 as they prefer to call it, they are referring to the so-called "data link layer" in the OSI model. If you're not a networking guy or gal, this might be an unfamiliar term, so let's spend some time explaining it.
Layer 2 is the layer below layer 3 (no surprise here!). Layer 3 is where the all-important Internet Protocol (IP) lives. Layer 3 is also called the "network layer" in OSI terms.
So what does this layer 2 thing do? It's actually pretty simple. To explain it, let's look at what happens when you use the web browser on your computer to access a web site at the other side of the world. Your computer requests the contents of the web page, and the remote server sends it. The data that is exchanged is transferred via the IP protocol. Because the IP protocol can only send a limited amount of data at a time, the data is split into small chunks called "packets". These packets travel from your computer to your ISP, then probably to an internet exchange and a transit provider. The transit provider sends the packet to the right location on the globe, where it is then sent to progressively more specific destinations until it reaches the web server. The point here is that each packet needs to be received and retransmitted many times before it reaches its final destination. The devices that receive and retransmit these packets are called routers. Each router on the path from source to destination is also called a "hop".
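To make the splitting step concrete, here is a toy sketch in Python. It only illustrates the idea of chunking; it is not real IP fragmentation, and the 1500-byte limit is simply a typical Ethernet MTU:

```python
def split_into_packets(data: bytes, max_size: int) -> list[bytes]:
    """Split a payload into chunks of at most max_size bytes,
    the way IP splits a large transfer into packets."""
    return [data[i:i + max_size] for i in range(0, len(data), max_size)]

# A pretend web page of a few kilobytes:
page = b"<html>" + b"x" * 3000 + b"</html>"
packets = split_into_packets(page, 1500)
print(len(packets))  # 3 packets for 3013 bytes of data
```

Each of those chunks is what then hops from router to router, as described above.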
Now, layer 2 is what actually transmits the packet from hop to hop. This works as follows. Each layer 3 packet is embedded in a layer 2 packet. The layer 2 packet includes a header and sometimes a trailer, and is therefore slightly bigger than the layer 3 packet it encapsulates. Layer 2 then sends the packet to its next destination. The next destination extracts the layer 3 payload from the layer 2 packet, and repeats the process.
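Here is a sketch of what that encapsulation looks like for Ethernet, where the layer 2 header is 14 bytes: a destination MAC, a source MAC and a 2-byte type field. The payload below is a stand-in, not a real IP packet:

```python
import struct

def encapsulate(dst_mac: bytes, src_mac: bytes, ethertype: int, l3_packet: bytes) -> bytes:
    """Build an Ethernet frame: a 14-byte header (destination MAC,
    source MAC, ethertype) followed by the layer 3 payload."""
    header = struct.pack("!6s6sH", dst_mac, src_mac, ethertype)
    return header + l3_packet

frame = encapsulate(b"\xaa" * 6, b"\xbb" * 6, 0x0800, b"pretend IP packet")
print(len(frame))  # 14 header bytes + 17 payload bytes = 31
```

The receiving hop does the reverse: it strips the 14-byte header and hands the inner packet back to layer 3.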
From the description above, it should be clear that layer 2 is an absolutely essential layer in the network, and the Internet wouldn’t work without it. (L2 in its turn uses L1 to transmit the actual bits using electric fields or light waves according to a "physical layer" specification. And below L1 there is ... nothing!).
Layer 2 is a lesser-known network layer because usually only network people care about it. Programmers and IT people are not normally aware of any details of layer 2 because it is something that is there and "just works" (until it doesn't, or until the abstraction breaks).
A few more quick facts to finalize this introduction to layer 2: the most frequently used layer 2 network (by far) is Ethernet. To distinguish packets on layer 2 from those on layer 3, a layer 2 packet is called a "frame". And finally, an "ethertype" is a field in the header of an Ethernet frame that indicates the type of payload. The most used ethertype is 0x0800, which indicates an IP payload.
Without a doubt, the biggest difference between networking in the datacenter and cloud networking is the extent to which computers can interact with layer 2.
In the datacenter, there's usually full access to layer 2. This means that a server, via its network card, can send (and receive) arbitrary frames to and from the network, without any filtering. In the public cloud, this is not the case. Virtual Machines in the cloud still have virtual Ethernet adapters that connect to a virtual L2 network. And the cloud itself obviously has an L2 network as well. What's different is that the frames that are sent and received are heavily filtered. All major clouds, including Amazon EC2, Amazon VPC, Google Compute Engine and Microsoft Azure, allow only unicast frames with IP payloads. Broadcast frames and non-IP payloads are not allowed (with very limited exceptions to make parts of the essential ARP and DHCP protocols work).
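A hypothetical model of this filtering policy can be written as a small predicate. The function names are mine, not any cloud's actual API, and real filters are more nuanced (for example, the ARP and DHCP exceptions mentioned above):

```python
IPV4_ETHERTYPE = 0x0800

def is_multicast_or_broadcast(dst_mac: bytes) -> bool:
    # In Ethernet, the least significant bit of the first byte of the
    # destination MAC marks a group (multicast) address; the all-ones
    # broadcast address ff:ff:ff:ff:ff:ff is a special case of it.
    return bool(dst_mac[0] & 0x01)

def cloud_allows(dst_mac: bytes, ethertype: int) -> bool:
    """Only unicast frames carrying an IP payload get through."""
    return not is_multicast_or_broadcast(dst_mac) and ethertype == IPV4_ETHERTYPE

print(cloud_allows(b"\x02\x00\x00\x00\x00\x01", 0x0800))  # True: unicast IP
print(cloud_allows(b"\xff" * 6, 0x0800))                  # False: broadcast
print(cloud_allows(b"\x02\x00\x00\x00\x00\x01", 0x0842))  # False: non-IP payload
```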
So what gives? Well, let's look at a few applications that require more than just "unicast with IP payload":
At Ravello, we realized that having full unfiltered access to layer 2 is very important. Full L2 access is available in the datacenter, and various applications require it. Therefore it should also be available in the cloud. Without full L2 access, the cloud network would not look like the datacenter network.
In order to provide full L2 access, Ravello has developed a Software Defined Network (SDN) that runs on top of the cloud provider network. The SDN implements an overlay network that encapsulates the layer 2 Ethernet frames and sends them as regular unicast IP packets so that they don’t get blocked by the cloud provider. Virtual machines that run as part of a Ravello application have virtual network adapters that are connected to this overlay network, and not to the actual cloud provider network. The overlay network is 100% L2 clean, meaning that arbitrary L2 frames may be sent and received by all VMs inside a Ravello application.
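The general encapsulation idea can be shown in a few lines. This sketch is mine, similar in spirit to overlay protocols like VXLAN, and not Ravello's actual wire format: prefix the raw frame with a small overlay header and ship the result as the payload of an ordinary unicast UDP/IP packet:

```python
import struct

def wrap_frame(vnet_id: int, frame: bytes) -> bytes:
    """Prefix a raw Ethernet frame with a 4-byte virtual network id;
    the result travels inside a normal unicast UDP packet, so the
    cloud's filters never see the inner frame."""
    return struct.pack("!I", vnet_id) + frame

def unwrap_frame(payload: bytes) -> tuple[int, bytes]:
    """Inverse of wrap_frame: recover the virtual network id and frame."""
    (vnet_id,) = struct.unpack_from("!I", payload, 0)
    return vnet_id, payload[4:]

# A broadcast frame with a non-IP ethertype survives the round trip intact:
inner = b"\xff" * 6 + b"\xaa" * 6 + b"\x08\x42" + b"arbitrary L2 payload"
vnet_id, frame = unwrap_frame(wrap_frame(7, inner))
print(vnet_id == 7 and frame == inner)  # True
```

Because the outer packet is always plain unicast IP, it passes the cloud's filters no matter what the inner frame contains.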
Above is a screenshot that shows how non-IP frames can be sent and received between two VMs in a Ravello application. In this case I sent a "wake on LAN" packet (ethertype 0x0842) from one VM to the Ethernet broadcast address, and used "tcpdump" in another VM to show it. This specific example shows that our overlay network gives unfiltered access to layer 2.
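If you want to reproduce something like that experiment, the wake-on-LAN payload itself is easy to build. Actually putting it on the wire needs a raw socket and root privileges, so only the packet construction is sketched here, and the MAC address is a made-up example:

```python
WOL_ETHERTYPE = 0x0842
BROADCAST_MAC = b"\xff" * 6

def magic_packet(target_mac: bytes) -> bytes:
    """Wake-on-LAN 'magic packet': six 0xff bytes followed by the
    target MAC address repeated sixteen times (102 bytes in total)."""
    assert len(target_mac) == 6
    return b"\xff" * 6 + target_mac * 16

payload = magic_packet(bytes.fromhex("001122334455"))
print(len(payload))  # 102
```

On an ordinary cloud network this frame would be dropped twice over: it is broadcast, and its ethertype is not IP.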
In the next post in this series I will talk about IP multicasting. I will also give a more useful technology demo that requires L2 access: sharing a virtual IP between two hosts to create a highly available load balancer.