

  • September 14, 2015

LISP Leaf & Spine architecture with Arista vEOS using Ravello on AWS

Author:
Matt Conran
Matt Conran is a Network Architect based in Ireland and a prolific blogger at Network Insight. In his spare time he writes on topics including SDN, OpenFlow, NFV, OpenStack, cloud, automation and programming.

This post discusses a Leaf and Spine data center architecture with Locator/ID Separation Protocol (LISP) based on Arista vEOS. It begins with a brief introduction to these concepts and continues with how one can set up a fully functional LISP deployment using Ravello’s Network & Security Smart Labs. If you are interested in running this LISP deployment, just open a Ravello account and add this blueprint to your library.

What is Locator/ID Separation Protocol (LISP)?

The IP address is an overloaded construct: we use it to determine both “who” a host is and “where” it is located in the network. This lack of abstraction causes problems, as forwarding devices must know all possible forwarding paths in order to forward packets. The result is large forwarding tables and the inability of end hosts to move across Layer 3 boundaries while keeping their IP address. LISP separates the host identity from the routing path information, in much the same way the Domain Name System (DNS) solved the local host file problem. It uses overlay networking concepts and a dynamic mapping control system, so its architecture looks similar to that of Software Defined Networking (SDN).

The LISP framework consists of a data plane and a control plane. The control plane is the registration protocol and procedures, while the data plane is the encapsulation/decapsulation process. The data plane specifies how Endpoint Identifiers (EIDs, the end hosts) are encapsulated towards Routing Locators (RLOCs), and the control plane specifies the interfaces to the LISP mapping system that provides the mapping between EID and RLOC. An EID can be represented by an IPv4, IPv6 or even a MAC address; if represented by a MAC address, it is Layer 2 over Layer 3 LISP encapsulation. The LISP control plane is very extensible and can be used with other data path encapsulations such as VXLAN and NVGRE. Future blueprints will discuss Jody Scott's (Arista) and Dino Farinacci's (LISP author) work towards a LISP control plane with a VXLAN data plane, but for now, let's build a LISP cloud with standard LISP encapsulation, inheriting parts of that blueprint.
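
To make the EID-to-RLOC split more concrete, here is a minimal Python sketch of a map cache and the encapsulation decision an xTR makes (the lispers.net software used later in this blueprint is itself written in Python). The addresses, the MAP_CACHE structure and the simplified header layout are illustrative assumptions; only the UDP data-plane port 4341 comes from the LISP specification.

```python
import struct

# Illustrative EID-to-RLOC entries; in a real deployment these are learned
# from the LISP mapping system, not hard-coded.
MAP_CACHE = {
    "5.5.5.5": "172.16.0.5",   # EID reachable behind one xTR (addresses are assumptions)
    "6.6.6.6": "172.16.0.6",   # EID reachable behind another xTR
}

LISP_DATA_PORT = 4341  # UDP destination port defined for the LISP data plane


def lisp_encapsulate(inner_packet, dst_eid):
    """Return (outer payload, RLOC) if the destination EID has a mapping."""
    rloc = MAP_CACHE.get(dst_eid)
    if rloc is None:
        # No mapping: a real xTR would send a Map-Request to the mapping system.
        return None
    # Simplified 8-byte LISP data header: flags, 24-bit nonce, 32-bit instance/LSB field.
    lisp_header = struct.pack("!B3sI", 0x80, b"\x00\x00\x01", 0)
    # The caller would place this payload in a UDP datagram to rloc:4341.
    return lisp_header + inner_packet, rloc


if __name__ == "__main__":
    result = lisp_encapsulate(b"...inner IP packet bytes...", "6.6.6.6")
    if result:
        payload, rloc = result
        print(f"forward {len(payload)} bytes to {rloc} UDP/{LISP_DATA_PORT}")
```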

What does LISP enable?

LISP enables end hosts (EIDs) to move and attach to new locators. The host has a unique address, but that IP address does not live in the subnet that corresponds to its location. It is not location locked: you can pick up the endpoint and move it anywhere. For example, smartphones can move from WiFi to 3G to 4G. There are working solutions for an open LISP ecosystem (Lispers.net) that allow an endpoint to move around the data center and across multiple vendors while keeping its IP address. No matter where the endpoint moves, its IP address does not change.

At an abstract level, the EID is the “who” and the locator is “where the who is”.

Leaf & Spine Architecture

Leaf and Spine architectures are used to speed up connectivity and improve bandwidth between hosts. The underlying Clos fabric (named after Charles Clos) is a relatively old concept, but it does go against what we have been doing in traditional data centers.

Traditional data centers have three layers (core, aggregation and access) with some oversubscription between the layers. The core is generally Layer 3 and the access layer Layer 2. If Host A needs to communicate with Host B, the bandwidth available to it depends on where the hosts are located. If the hosts are connected to the same access (ToR) switch, traffic can be switched locally. But if a host needs to communicate with another host via the aggregation or core layer, it will have less bandwidth available due to the oversubscription ratios and aggregation points. The bandwidth between the two hosts depends on their placement. This results in a design constraint, as you have to know in advance where to deploy servers and services. You do not have the freedom to deploy servers in any rack that has free space.
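
As a rough worked example of why placement matters in the three-tier model, the snippet below computes the oversubscription ratio of a typical access switch. The port counts are assumptions for illustration, not figures from any particular design.

```python
# Hypothetical ToR/access switch: 48 x 10G host-facing ports, 4 x 40G uplinks
# towards the aggregation layer (illustrative numbers only).
downlink_gbps = 48 * 10   # 480 Gbps of host-facing capacity
uplink_gbps = 4 * 40      # 160 Gbps towards the aggregation layer

ratio = downlink_gbps / uplink_gbps
print(f"oversubscription ratio: {ratio:.0f}:1")  # 3:1

# Hosts on the same ToR can use their full 10G ports, but traffic leaving the
# rack shares the 160G of uplinks with everything else in that rack.
```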

The following diagram displays the Ravello Canvas settings for the leaf and spine design. Nodes labelled “Sx” are spine nodes and “Lx” are the leaf nodes. There are also various compute nodes representing end hosts.

What we really need are equidistant endpoints. The placement of a VM should not be a concern: wherever you deploy a VM, it should have the same bandwidth to any other VM. Obviously, there are exceptions for servers connected to the same ToR switch. The core should also be non-blocking, so that inbound and outbound flows are independent; we don't want an additional blocking element in the core. Networks should also provide unlimited workload placement and the ability to move VMs around the data center fabric.

Three-tiered data center architectures are not as scalable and add complexity to provisioning. You have to think carefully about where things are in the data center to give the user the best performance. This increases costs, as certain areas of the data center are underutilized, and underutilized servers lose money. To build your data center as big as possible with equidistant endpoints, you need to flatten the design into a leaf and spine architecture.

I have used Ravello Network & Security Smart Lab to set up a large leaf and spine architecture based on Arista vEOS to demonstrate LISP connectivity. Ravello gives you the ability to scale to very large virtual networks, which would be difficult to do in a physical environment. Implementing a large leaf and spine architecture in a physical lab would require lots of time, rack space and power, but with Ravello it is a matter of a few clicks.

Setting up LISP cloud on Ravello

Get it on Repo
REPO by Ravello Systems is a library of public blueprints shared by experts in the infrastructure community.

The core setup on Ravello consists of 4 spine nodes. These nodes provide the connectivity between the other functional blocks within the data center and provide the IP connectivity between end hosts. The core should forward packets as fast as possible.

The chosen fabric for this design is Layer 3, but if the need arises we can easily extend Layer 2 segments with a VXLAN overlay. See the previous post on VXLAN for bridging Layer 2 segments. The chosen IP routing protocol is BGP, and BGP neighbors are set up between the spine and leaf nodes. BGP not only allows you to scale networks, it also decreases network complexity: neighbors are explicitly defined and policies are configured per neighbor, offering a deterministic design. Another common protocol for this design would be OSPF, with each leaf in a stubby area. Stubby areas are used to limit route propagation.
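
As a minimal sketch of what the per-leaf BGP configuration looks like, the following Python snippet renders an Arista-style router bgp block for one leaf peering with the four spines. The AS numbers and peer addresses are hypothetical, not the addressing used in the blueprint itself.

```python
SPINE_ASN = 65000       # hypothetical: all spines share one AS
LEAF_ASN_BASE = 65100   # hypothetical: each leaf gets its own AS


def leaf_bgp_config(leaf_id, spine_peers):
    """Render an Arista-style 'router bgp' block for one leaf switch."""
    lines = [f"router bgp {LEAF_ASN_BASE + leaf_id}"]
    lines.append(f"   router-id 10.0.0.{leaf_id}")
    for peer in spine_peers:
        lines.append(f"   neighbor {peer} remote-as {SPINE_ASN}")
    lines.append("   maximum-paths 4")  # use all four spine uplinks (ECMP)
    return "\n".join(lines)


spine_peers = ["10.1.0.1", "10.1.0.2", "10.1.0.3", "10.1.0.4"]
print(leaf_bgp_config(5, spine_peers))
```

Because every neighbor and policy is defined explicitly, the resulting per-leaf configuration is easy to audit, which is part of what makes the design deterministic.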

The Leaf nodes connect hosts to the core and they are equivalent to the access layer. They are running Arista vEOS and support BGP to the spine. We are using 4 leaf nodes located in three different racks.

XI is the management jump host and is enabled for external SSH connectivity. It is used to manage the internal nodes, and it is from here that you can SSH to the entire network.

The following diagram displays access from XI to L5. Once on Leaf 5 we issue commands to display BGP peerings. The leaf nodes run BGP with the Spine nodes.

We also have 4 compute nodes in three racks. These nodes simulate end hosts and run Ubuntu. The individual devices do not have external connectivity, so to access them from a local SSH client you must first SSH to XI.

LISP Configuration

LISP is enabled with the lisp.config file, which is on C1, C2, L5 and L6. The software is Python based. It can be found in the directory listed below. If you need to make changes to this file or view its contents, enter bash mode within Arista vEOS and view it with the default text viewer.

None of the spine nodes run the LISP software; they transport IP packets by traditional means, i.e. they do not encapsulate packets in UDP or carry out any LISP functions. Leaf nodes L5 and L6 perform the LISP xTR function and carry out the encapsulation and decapsulation.

The diagram below displays the output from a tcpdump while in bash mode. ICMP packets are sent from the LISP source loopback of C9 (5.5.5.5) to C11 (6.6.6.6). These IP addresses are permitted by the LISP process to trigger LISP encapsulation; you will need to ping between this source and destination to trigger the LISP process. All other traffic flows are routed normally.

C1 and C2 are the LISP mapping servers and perform the LISP control plane services. The following Wireshark captures display the LISP UDP encapsulation and the control plane Map-Register messages to 172.16.0.22.
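
If you save a capture to a file rather than reading it live (for example with tcpdump -w), a short script can pick out the LISP traffic by its well-known UDP ports: 4341 for the data plane and 4342 for the control plane. This is a sketch only; the filename is an assumption and it requires the dpkt library.

```python
import socket

import dpkt

LISP_DATA_PORT = 4341     # LISP data-plane encapsulation
LISP_CONTROL_PORT = 4342  # LISP control plane (Map-Register, Map-Request, ...)

# "lisp.pcap" is an assumed filename for a capture taken on an xTR.
with open("lisp.pcap", "rb") as f:
    for _ts, buf in dpkt.pcap.Reader(f):
        eth = dpkt.ethernet.Ethernet(buf)
        if not isinstance(eth.data, dpkt.ip.IP):
            continue
        outer = eth.data
        if not isinstance(outer.data, dpkt.udp.UDP):
            continue
        udp = outer.data
        if udp.dport == LISP_DATA_PORT:
            # Skip the 8-byte LISP data header to reach the inner (EID) IP packet.
            inner = dpkt.ip.IP(bytes(udp.data)[8:])
            print("LISP data:",
                  socket.inet_ntoa(inner.src), "->", socket.inet_ntoa(inner.dst),
                  "via RLOC", socket.inet_ntoa(outer.dst))
        elif udp.dport == LISP_CONTROL_PORT:
            print("LISP control message to", socket.inet_ntoa(outer.dst))
```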

Before you begin testing, verify that the LISP processes have started on C1, C2, L5 and L6 with the command ps -ef | grep lisp. If it does not return four LISP processes, restart the LISP software with the command ./RESTART-LISP.
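
If you want to script that check from your workstation, the sketch below runs the same ps command on each LISP-enabled node over SSH, jumping through XI first as described above. The node names and jump-host alias are assumptions; substitute the addresses used in your own Ravello application.

```python
import subprocess

LISP_NODES = ["C1", "C2", "L5", "L6"]  # assumed SSH names for the LISP-enabled nodes
JUMP_HOST = "XI"                       # assumed SSH name/alias for the jump host

for node in LISP_NODES:
    # -J proxies the connection through the XI jump host (OpenSSH ProxyJump).
    # grep "[l]isp" avoids matching the grep process itself.
    result = subprocess.run(
        ["ssh", "-J", JUMP_HOST, node, "ps -ef | grep [l]isp"],
        capture_output=True, text=True,
    )
    if result.stdout.strip():
        print(f"{node}: LISP processes found")
    else:
        print(f"{node}: no LISP process, run ./RESTART-LISP")
```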

Conclusion

LISP in conjunction with a leaf-spine topology helps architect efficient and scalable data centers. Interested in trying out the LISP leaf-spine topology mentioned in this blog? Just open a Ravello account and add this blueprint to your library.

I would like to thank Jody Scott and Dino Farinacci for collaborating with me to build this blueprint.
