

February 9, 2016

NFV Orchestration: Set Up NFV Orchestration on AWS and Google Cloud (part 4 of a 4-part series)

Authors:
Jakub Pavlik
Jakub Pavlik and Ondrej Smola are engineers at tcpcloud – a leading private cloud builder.
Matt Conran
Matt Conran is an independent network architect and consultant, and blogs at network-insight.net

This article is part 4 of a 4-part series on NFV orchestration using public cloud NFVI. This post details setting up fully functioning NFV orchestration with firewall and load balancer service chaining, and comes with a ready-made blueprint in which Juniper Contrail chains firewall and load balancer services in a topology you can access on Ravello and try out.

The NFV topology in this Ravello blueprint presents firewalling and load balancing Virtual Network Functions (VNFs). Three use case scenarios are prepared, showing FWaaS and LBaaS launched by OpenStack Heat templates:

  • pfSense - a free, open source, FreeBSD-based firewall and router with unified threat management, load balancing, and multi-WAN support.
  • FortiGate-VM - a full-featured FortiGate packaged as a virtual appliance, ideal for monitoring and enforcing policy on virtual traffic on leading virtualization, cloud and SDN platforms, including VMware vSphere, Hyper-V, Xen, KVM, and Amazon Web Services (AWS).
  • Neutron Agent-HAProxy - a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications.

Architecture Components

The following diagram shows the logical architecture of this blueprint. OpenStack together with OpenContrail provides the NFV infrastructure. The virtual resources are orchestrated through Heat, and different tools are then used for VNF management.

Components

  • NFV - service chaining in OpenContrail through VMs launched by OpenStack
  • VNF - orchestration of the VMs providing FWaaS (FortiGate, pfSense) and LBaaS (Neutron HAProxy plugin)

The NFV topology consists of 5 nodes. The management node provides public IP access, is reachable via SSH, and serves as a jump host to connect to all other nodes in the blueprint. The controller node is the brains of the operation and is where OpenStack and OpenContrail are installed. Finally, we have three compute nodes named Compute 1, Compute 2 and Compute 3, each with Nova Compute and the OpenContrail vRouter agent installed. This is where data plane forwarding is carried out.

The diagram below displays the 5 components used in the topology. All nodes apart from the management node have 8 CPUs, 16GB of RAM and 64GB of storage. The management node has 4 CPUs, 4GB of RAM and 32GB of storage.

The intelligence runs on the controller, which has a central view of the network. It provides route reflectors for the OpenContrail vRouter agents and configures them to initiate tunnels for endpoint connectivity. OpenContrail transport is based on well-known encapsulations: MPLS over UDP, MPLS over GRE, or VXLAN. By manipulating labels and next-hop information, the SDN controller can program the correct next hop to direct traffic to a variety of devices.

Previous methods for service chaining include VLANs and policy-based routing (PBR), which are cumbersome to manage and troubleshoot, and traditional methods may require some form of tunneling when chaining services over multiple Layer 3 hops. A central SDN controller is far better placed to provide rich service chaining, and this central viewpoint of the network is proving to be a valuable use case for SDN.

Internal communication between nodes is done over the 10.0.0.0/24 network. Every node has one NIC on the 10.0.0.0/24 network and the Management and Controller nodes have an additional NIC for external connectivity.

Installation of OpenStack with OpenContrail

For the installation of Juniper Contrail we used the official Juniper Contrail Getting Started Guide.

The package used is contrail-install-packages_2.21-102~ubuntu-14-04juno_all.deb, which installs both OpenStack and OpenContrail.
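A minimal sketch of the installation flow from the Getting Started Guide, assuming a testbed definition has already been written (exact paths and fab task names vary between Contrail releases, so treat this as illustrative):

# On the controller node: install the package bundle and unpack the installer
dpkg -i contrail-install-packages_2.21-102~ubuntu-14-04juno_all.deb
/opt/contrail/contrail_packages/setup.sh

# Describe the cluster in /opt/contrail/utils/fabfile/testbeds/testbed.py,
# then provision with the fabric tasks from the guide
cd /opt/contrail/utils
fab install_contrail
fab setup_all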

From the diagram below you can see that the virtual network has 5 instances, 9 interfaces and 4 VNs (virtual networks) for testing. The OpenContrail dashboard is the first place to view a summary of the virtual network.

Login information for every node:
User: root
Password: openstack

or alternatively:

User: ubuntu
Password: ravelloCloud

Login for the OpenStack and OpenContrail dashboards:
User: admin
Password: secret123

The OpenStack dashboard URL depends on the Ravello public IP of the controller node, but always follows the pattern x.x.x.x/horizon. For example:
http://controller-nfvblueprint-eaxd3p7s.srv.ravcloud.com/horizon/

The OpenContrail dashboard is at the same address, but on port 8143. For example:
https://controller-nfvblueprint-eaxd3p7s.srv.ravcloud.com:8143/login

NOTE: For a properly working VNC console in OpenStack, change the “novncproxy_base_url” line in /etc/nova/nova.conf on every compute node to point to your controller's URL.

Example:
novncproxy_base_url = http://controller-nfvblueprint-eaxd3p7s.srv.ravcloud.com:5999/vnc_auto.html
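A quick way to apply the change on each compute node and restart the compute service (the URL below is the example above; substitute your controller's hostname):

# On every compute node: point the noVNC proxy at the controller's public URL
sed -i 's|^novncproxy_base_url.*|novncproxy_base_url = http://controller-nfvblueprint-eaxd3p7s.srv.ravcloud.com:5999/vnc_auto.html|' /etc/nova/nova.conf
service nova-compute restart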

The two services we will be testing are load balancing and firewall service chaining. Load balancing is provided by the LBaaS agent, and firewalling is based on FortiGate and pfSense.

Within OpenStack we create one external network called “INET2”, which can be accessed from the outside (the Management and Compute nodes in Ravello).

The “INET2” network has a floating IP pool of 172.0.0.0/24, used to simulate public networks. The simple gateway for this network is on Compute2.

All virtual instances in OpenStack can be accessed from the OpenStack dashboard, through the console in the instance detail view.
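Alternatively, a console URL can be fetched from the CLI. A sketch assuming the Juno-era nova client (the credentials file path is an assumption; adjust to this environment):

# Source admin credentials, list instances, then request a noVNC console URL
source /root/keystonerc   # credentials file path is an assumption
nova list
nova get-vnc-console <instance-name> novnc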

OpenStack Heat Templates

Heat is the main project of the OpenStack orchestration program. It allows users to describe deployments of complex cloud applications in text files called templates. These templates are then parsed and executed by the Heat engine.
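As a quick reference, these are the Juno-era heat CLI commands used to manage stacks throughout this post (stack names are arbitrary):

heat stack-list                # show all stacks and their status
heat resource-list <stack>     # list the resources a stack created
heat stack-show <stack>        # inspect a stack's parameters and outputs
heat stack-delete <stack>      # tear a stack down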

OpenStack Heat templates are used to demonstrate load balancing and firewalling inside OpenStack.

These templates live on the controller node in the /root/heat/ directory. Every template has two parts - an environment file with deployment-specific variables, and the template itself. They are located in:

/root/heat/env/
/root/heat/templates/

We have 3 heat templates to demonstrate the NFV functions:

  • LBaaS
  • pfSense firewall - open source firewall
  • FortiGate-VM firewall - 15-day trial version

You can choose from two main use case scenarios:

LBaaS Use Case Scenario

To create the heat stack with the LBaaS function, use the command below:

heat stack-create -f heat/templates/lbaas_template.hot -e heat/env/lbaas_env.env lbaas

This command will create 2 web servers and an LBaaS service instance.

The load balancer is configured with a VIP and a floating IP, which can be accessed from the "public" side (the Management and Compute nodes in Ravello).
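To confirm the stack converged and to find the floating IP to test against, something along these lines should work:

heat stack-list           # wait for the lbaas stack to reach CREATE_COMPLETE
nova list                 # the two web servers and the service instance
neutron floatingip-list   # shows the floating IP associated with the VIP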

Firewalls (FWaaS) Use Case Scenarios

To create the heat stack for the pfSense function, use the command below:

heat stack-create -f heat/templates/fwaas_mnmg_template.hot -e heat/env/fwaas_pfsense_env.env pfsense

To create the heat stack for the FortiGate function, use the command below:

heat stack-create -f heat/templates/fwaas_mnmg_template.hot -e heat/env/fwaas_fortios_contrail.env fortios

This will create the service instance and one Ubuntu instance for testing.

Description of the Load Balancing Use Case

The Heat template used for the load balancer creates a number of elements, including the pool, members and health monitoring, and instructs OpenContrail to create service instances for load balancing. This is done through the OpenStack Neutron LBaaS API.

More information can be found here.
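Because the template drives the Neutron LBaaS v1 API, the objects it creates can also be inspected with the standard neutron client, for example:

neutron lb-pool-list            # the load balancing pool
neutron lb-member-list          # the web server members
neutron lb-healthmonitor-list   # the health monitor attached to the pool
neutron lb-vip-list             # the VIP allocated from the public network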

The diagram below displays the load balancer pools, members and the monitoring type:

The load balancing pool is created on a private subnet, 10.10.10.0/24. A VIP is assigned, allocated from the virtual network named public network, whose subnet is 10.10.20.0/24.

The load balancer has 2 ports to the private network and 1 port to the public network. There is also a floating IP assigned to the VIP, used for reachability from outside of OpenStack/OpenContrail.

The diagram below summarises the network topology for the virtual network:

For testing purposes the load balancing heat templates create 2 web instances in the private network. A router is also connected to this private network, because after boot the web instances attempt to download, install and configure the apache2 web service.

The diagram below displays the various parameters with the created instances:

Accessing the web servers' VIP address initiates classic round-robin load balancing.

NOTE: Sometimes the web instances do not install or configure apache2. This happens when the virtual simple gateway was not automatically created on Compute2. In this case, create the gateway manually using the python command located in /usr/local/sbin/startgw.sh on Compute2. After that, delete the lbaas heat stack and create it again, or simply set up apache2 manually.
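A sketch of the manual recovery, using the paths and stack name from this post:

# On Compute2: create the simple gateway manually
/usr/local/sbin/startgw.sh

# On the controller: recreate the stack so the web instances can reach the internet
heat stack-delete lbaas
heat stack-create -f heat/templates/lbaas_template.hot -e heat/env/lbaas_env.env lbaas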

cURL is used to transfer data and test the load balancing feature. The diagram below displays the curl command line run against the VIP address and the round-robin responses from instances 1 and 2.
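For example, repeating the request in a loop makes the alternation visible (the floating IP below is a placeholder; use the address reported by neutron floatingip-list):

# Each response should alternate between web instance 1 and web instance 2
for i in 1 2 3 4; do curl -s http://172.0.0.X/; done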

Description of FWaaS/NAT

We have prepared one heat template for the firewall service instance with NAT, and two heat environments for this template: one for the pfSense firewall and a second for the FortiGate firewall.

pfSense
Information about this firewall can be found here.

Login information
User: admin
Password: pfsense

FortiGate
Information about this firewall can be found here.

Login information
User: admin
Password: fortigate

NOTE: Compute2 must have its default gateway set up for testing; see the LBaaS note above.

FortiGate provisioning

These steps must be taken after the FortiGate VM has been successfully deployed by the Heat template. OpenStack runs an instance named MNMG01, which is used to configure the FortiGate service instance.

The configuration is done with two python scripts:

fortios_intf.py - configures the firewall's interfaces
fortios_nat.py - configures the firewall's NAT rules

Run the scripts:
python fortios_intf.py
python fortios_nat.py

NOTE: The configuration information is stored in .txt files:
fortios_intf.txt
fortios_nat.txt

Network Topology

The firewall service instance is connected to 3 networks: INET2 as the external network, private_net for testing instances, and svc-vn-mgmt for the management instance. The topology is the same for both examples (pfSense and FortiGate). In private_net there is one virtual instance for testing connectivity to the external network.

For successful service chaining, heat also creates a policy in Contrail and assigns it to the networks. Contrail orchestrates the service chaining.

Configuration and testing of pfSense

By default, the pfSense firewall is configured for NAT once the heat stack has started, so there is no need for any additional configuration. The pfSense image was preconfigured with DHCP services on every interface and with an outbound NAT policy.

After we start the heat stack with pfSense, service chaining is already functional: the testing instance's default gateway points to Contrail, and Contrail redirects the traffic to pfSense.
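From the testing instance's console you can verify that traffic is being chained through pfSense, for example (the target address is illustrative; any reachable address on the external network works):

# Run from the testing instance; replies confirm NAT through the pfSense service instance
ping -c 3 172.0.0.1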

The NAT sessions can also be seen in pfSense. In the shell, run:

pfctl -s state

Configuration and testing of FortiGate

FortiGate can be configured from the management instance. This instance has floating IP 172.0.0.5 with login root and password openstack, or it can be accessed through the VNC console from the OpenStack dashboard. The instance contains the 2 python scripts: one configures the interfaces (fortios_intf.py) and the other configures the firewall NAT policy (fortios_nat.py).

NOTE: If the FortiGate firewall has an IP other than 10.250.1.252, the information in /root/.ssh/config has to be changed.
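A sketch of that change, substituting the firewall's actual address:

# On the management instance: update the SSH host entry for the FortiGate
sed -i 's/10.250.1.252/<new-fortigate-ip>/' /root/.ssh/config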

python fortios_intf.py

python fortios_nat.py

After running these two scripts, the testing instance has connectivity to the external network.

Interested in trying this setup with one click? Just open a Ravello trial account, add this NFV blueprint to your account, and you are ready to play with this NFV topology, with Contrail orchestrating and service chaining the load balancer and firewall as VNFs.
