This article is part of a four-part series on NFV orchestration using public cloud NFVI. This post in the series looks at how service orchestration with Juniper Contrail can assist with multi-tenancy and workload mobility, and increase service velocity through NFV orchestration and service chaining.
Contrail uses both an SDN controller and vRouter instances to manage VNFs. Juniper views the entire platform as a cloud network automation solution that provides you with virtual networks. These virtual networks ride on top of physically interconnected devices, with much of the intelligence, such as tunnel creation, pushed to the edge of the network. The edge of the network now sits closer to the user groups it serves. This is the essence of cloud-style multi-tenancy, which couldn't be done properly with VLANs and other traditional networking mechanisms.
Contrail exposes APIs to orchestration platforms so it can receive service-creation commands and provision new network services. From these commands it spins up virtual machines running the required network functions, for example NAT or IPS.
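The command-to-VNF mapping can be sketched in a few lines. This is a toy model, not the real Contrail API: the catalogue entries, image names, and tenant parameter are all illustrative assumptions showing how a service request might translate into a VNF launch plan.

```python
# Toy sketch of service-creation commands becoming VNF VM launches.
# Image names and the catalogue are illustrative assumptions, not
# Contrail's actual data model.

def provision_service(service_type, tenant):
    """Map a requested service to the VNF images it needs."""
    vnf_catalogue = {
        "nat": ["vnf-nat"],
        "ips": ["vnf-ips"],
        "secure-nat": ["vnf-nat", "vnf-ips"],  # a composite service
    }
    images = vnf_catalogue.get(service_type)
    if images is None:
        raise ValueError(f"unknown service: {service_type}")
    # In a real deployment the orchestrator would ask the compute layer
    # (e.g. Nova) to boot these images; here we just return the plan.
    return [{"tenant": tenant, "image": img} for img in images]

plan = provision_service("secure-nat", tenant="acme")
```

A composite service like "secure-nat" simply expands to multiple VNF launches, which is what lets the orchestrator compose richer services from basic functions.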
Juniper employs standardised protocols, BGP and MPLS/VPN, which have extremely robust and mature implementations. Why reinvent the wheel when there are proven technologies that work?
Supporting both virtual and physical resources, Contrail also leverages the open-source cloud orchestration platform OpenStack and acts as a plugin for OpenStack's Neutron project. OpenStack and Contrail are fully integrated: when you create a network in Contrail, it shows up in OpenStack. Contrail has also been extended to use the OpenStack Heat infrastructure to deploy networking constructs.
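To make the Heat integration concrete, here is a minimal template expressed as a Python dict for illustration. `OS::Neutron::Net` and `OS::Neutron::Subnet` are standard Heat resource types; with the Contrail Neutron plugin in place, creating them also creates the corresponding Contrail virtual network. The network name and CIDR are arbitrary examples.

```python
# A minimal Heat template sketched as a Python dict. In practice this
# would be written in YAML and passed to Heat; the network name and
# CIDR below are arbitrary examples.
heat_template = {
    "heat_template_version": "2015-04-30",
    "resources": {
        "front_end_net": {
            "type": "OS::Neutron::Net",            # standard Heat resource type
            "properties": {"name": "front-end"},
        },
        "front_end_subnet": {
            "type": "OS::Neutron::Subnet",
            "properties": {
                "network_id": {"get_resource": "front_end_net"},
                "cidr": "10.0.1.0/24",
            },
        },
    },
}
```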
Juniper's Contrail allows you to consume the network in an abstract way. What this means is that you can specify the networking requirement to the orchestrator in a simple manner. The abstract definitions are then handed to the controller. The controller acts as a compiler: it takes these abstract definitions and converts them into low-level constructs. Low-level constructs might include the routing instances or ACLs required to implement the topology you specified in the orchestration system.
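The "controller as compiler" idea can be sketched as a small lowering pass. This is an illustrative model only: the routing-instance naming and ACL fields are assumptions, not Contrail's actual internal representation.

```python
# Sketch of the controller-as-compiler idea: an abstract topology
# (networks plus policies) is lowered into per-network routing
# instances and ACL entries. All field names are assumptions.

def compile_topology(networks, policies):
    # One routing instance per virtual network (illustrative naming).
    routing_instances = [f"ri-{net}" for net in networks]
    # Each abstract policy becomes a concrete permit rule.
    acls = [
        {"src": p["from"], "dst": p["to"], "proto": p["proto"], "action": "permit"}
        for p in policies
    ]
    return {"routing_instances": routing_instances, "acls": acls}

constructs = compile_topology(
    networks=["front-end", "caching"],
    policies=[{"from": "front-end", "to": "caching", "proto": "http"}],
)
```

The operator only ever writes the abstract inputs; the low-level output is what gets pushed down to the vRouters.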
The vRouter implements the distributed forwarding plane and is installed as a kernel module in the hypervisor of every x86 compute node. It extends the IP network into a software layer. The reason the vRouter is installed on all x86 compute nodes is that VMs can be spun up on any of them; if VM-A gets spun up on compute node A, we already have forwarding capability on that node. The vRouter does not augment the Linux bridge or OVS; it is a complete replacement.
The intelligence of the network now lives in the controller, which programs the local vRouter kernel modules. The network fabric, be it leaf-and-spine or some other physical architecture, only needs to provide end-to-end IP connectivity between the endpoints. It doesn't need to carry out any intelligence, policy, or service decision-making. All of that is taken care of by the Contrail controller, which pushes rules down to the vRouters sitting at the edge of the network.
Now that the service is virtualized, it can easily be scaled out with additional VMs as traffic volume grows. Elastic and dynamic networks can offer on-demand network services. For example, say you have a requirement to restrict access to certain sites during working hours. NFV enables you to order a firewall service via a service catalogue. The firewall gets spun up and properly service chained between the networks. Once you no longer require the firewall service, it is deactivated immediately and the firewall instance is spun down. Any resources utilized by the firewall instance are released. The entire process enables elastic, on-demand service insertion.
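The lifecycle above can be modelled in a short sketch. This is not a real NFV MANO API; the class, method names, and vCPU figures are illustrative assumptions showing the order/scale/deactivate flow and the release of resources.

```python
# Illustrative sketch (not a real orchestration API) of the elastic
# firewall lifecycle: order from a catalogue, scale with load, then
# spin down and release resources. Numbers are made up.

class ServiceCatalogue:
    def __init__(self):
        self.active = {}        # service name -> list of VM instances
        self.capacity_used = 0  # vCPUs in use, illustrative

    def order(self, service):
        """Spin up one VM for the requested service."""
        vm = {"service": service, "vcpus": 2}
        self.active.setdefault(service, []).append(vm)
        self.capacity_used += vm["vcpus"]
        return vm

    def scale_out(self, service):
        """Add another VM as traffic volume grows."""
        return self.order(service)

    def deactivate(self, service):
        """Spin the service down and release its resources."""
        for vm in self.active.pop(service, []):
            self.capacity_used -= vm["vcpus"]

catalogue = ServiceCatalogue()
catalogue.order("firewall")
catalogue.scale_out("firewall")   # traffic grew, add capacity
catalogue.deactivate("firewall")  # no longer needed, resources released
```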
Service insertion is no longer tied to physical appliance deployment, which in the past severely restricted product and service innovation. Service providers can try out new services on demand, reducing time to market.
For example, say we have an application stack with multiple tiers. The front end implements web functionality, the middle tier implements caching, and the backend serves as the database tier. You require three networks, one per tier, and a VM in each network implements that tier's functionality. You attach a simple security policy: only HTTP is permitted between the front end and the caching tier, and traffic is scrubbed by a virtual firewall before it reaches the database tier.
This requirement is entered into the orchestration system, and the compute orchestrator (OpenStack Nova) launches the VMs, matched per tier, on the corresponding x86 compute nodes. For VMs that are in the same network but on different hosts, the network is extended by the vRouters establishing a VPN tunnel. A mesh of tunnels can be created to whatever hosts are needed. The vRouter creates a routing instance on each host for each network and enforces the security policies. All the security policies are implemented by the local vRouters sitting in the kernel.
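The placement-driven logic above can be sketched as follows: given where the VMs land, work out which hosts need a routing instance for each network and which host pairs need a tunnel. This mirrors the description in the text; it is not vRouter code, and the VM, network, and host names are made-up examples.

```python
# Sketch: derive per-host routing instances and the tunnel mesh from
# VM placements, as described in the text. All names are illustrative.

from itertools import combinations

placements = [  # (vm, network, host)
    ("web-1", "front-end", "compute-a"),
    ("web-2", "front-end", "compute-b"),
    ("cache-1", "caching", "compute-b"),
]

def plan_fabric(placements):
    # Each host hosting a VM in a network needs a routing instance
    # for that network.
    per_network_hosts = {}
    for _, net, host in placements:
        per_network_hosts.setdefault(net, set()).add(host)
    # Networks spanning multiple hosts get a full mesh of tunnels.
    tunnels = {
        net: sorted(combinations(sorted(hosts), 2))
        for net, hosts in per_network_hosts.items()
        if len(hosts) > 1
    }
    return per_network_hosts, tunnels

hosts_by_net, tunnel_mesh = plan_fabric(placements)
```

Here "front-end" spans two hosts, so those two vRouters bring up a tunnel between them, while "caching" lives on a single host and needs none.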
Security policies assigned to tenants are no longer configured in your physical network; there is no need for the box-by-box mentality. Policy is contained in the vRouter, which is distributed throughout the network.
Nothing in the physical environment needs to change. The controller programs the flows. For example: if VM-A talks to VM-B, send the packet to the virtual load balancer and then to the virtual firewall. All vRouters are then programmed to look for this match, and when it is met, they send the traffic to the correct next hop for additional servicing. This is the essence of Contrail service chaining: the ability to specify an ordered list of services you would like the traffic to pass through. The controller and the vRouters take care of making sure each stream of traffic follows the appropriate chain. For example, send HTTP traffic through a firewall and a load balancer, but send telnet traffic through just the firewall.
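The per-traffic-class chains in that last example can be sketched as a simple lookup of "what is my next hop in the chain". This is an illustrative model, not vRouter forwarding code; the chain contents come straight from the example in the text.

```python
# Sketch of per-traffic-class service chains: HTTP passes through a
# firewall then a load balancer; telnet passes through the firewall
# only. A vRouter-like hop asks for the next service in the chain.

chains = {
    "http": ["firewall", "load-balancer"],
    "telnet": ["firewall"],
}

def next_hop(traffic_class, current=None):
    """Return the next service for this traffic class, or the final
    destination once the chain is exhausted."""
    chain = chains.get(traffic_class, [])
    if current is None:                      # entering the chain
        return chain[0] if chain else "destination"
    idx = chain.index(current)
    return chain[idx + 1] if idx + 1 < len(chain) else "destination"
```

Each hop only needs to know the traffic class and its own position; the ordered list itself is what the controller distributes.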