
How to run VMware NSX and Cisco Nexus 1000v on AWS & Google Cloud

Author: Matt Conran
 
Matt Conran is a Network Architect based in Ireland and a prolific blogger at Network Insight. In his spare time he writes on topics including SDN, OpenFlow, NFV, OpenStack, cloud, automation and programming.

Network and data-center architects are evaluating network virtualization solutions to bring workload agility to their data centers. This article (part 3 of a 3-part series) details how to set up fully functional VMware NSX and Cisco Nexus 1000v deployments on Ravello to evaluate each solution. Part 1 compares the architectural components of Cisco Nexus 1000v and VMware NSX, and Part 2 looks into the capabilities supported by each.

Setting up Nexus 1000v and vSphere 6.0 on public cloud

In this section we will walk through setting up a VMware vSphere 6.0 environment with the addition of Cisco's Nexus 1000v on AWS and Google Cloud using Ravello, and save a 'blueprint template' of the setup for one-click deployment. VMware vSphere is a virtualization platform that by default comes with a standard virtual switch and a distributed virtual switch (DVS). The Nexus 1000v is a Cisco product integrated into vCenter for additional functionality. Similar to the VMware VDS, it follows a distributed architecture and is Cisco's implementation of the distributed virtual switch, which is a generic term across vendors. The Nexus 1000v is a distributed platform that uses the Virtual Ethernet Module (VEM) for the data plane and the Virtual Supervisor Module (VSM) for the control plane. The VEM operates inside the VMware ESXi hypervisor.

The setup consists of a number of elements. The vCenter server runs on a Windows Server 2012 machine (trial edition) rather than as an appliance and acts as the administration point for the virtualized domain. Two ESXi hosts are installed with test Linux VMs. Later, we will install the Nexus 1000v modules, both VEM and VSM, on one of the ESXi hosts. The vSphere client version 6 is also installed on the Windows Server 2012 machine. The ESXi hosts have default configurations, including the standard vSwitch and port groups. The architecture below is built into a working blueprint, enabling you to go on and build a variety of topologies and services.

We currently have two ESXi hosts and one vCenter. A flat network of 10.0.0.0/16 is used, so we have IP connectivity between all hosts. One requirement for a Nexus 1000v deployment is a vSphere Enterprise Plus licence, so the Nexus 1000v here is installed on an Enterprise Plus licensed vSphere environment. We install the Nexus 1000v release packaged as Nexus1000v.5.2.1.SV3.1.5a-pkg.zip, which is compatible with vSphere 6.0. It can be downloaded from the Cisco website for free with a Cisco CCO account. There are two editions of the Nexus 1000v available, Standard and Enterprise. The Enterprise edition has additional features requiring a license; the Standard edition has a slightly reduced feature set but is free to download. This blueprint uses the Standard edition.

Nexus 1000v Installation

Once downloaded, you can deploy the OVA within vCenter. There are a number of steps to go through, such as setting the VSM domain ID, the management IP address and so on.

Once finished, you should see the Nexus 1000v deployed as a VM in your inventory. Power it on and SSH to the management IP address. The Nexus 1000v has the concept of control and packet VLANs; it is possible to use VLAN 1 for both, but for production environments it is recommended to separate them. This blueprint uses Layer 3 mode, so we don't need to configure these VLANs (see the configuration sketch after this paragraph). Next, we must register the Nexus 1000v with vCenter by downloading the Nexus 1000v extension and importing it into vCenter. Browse to the web GUI of the VSM and right-click the extension file to save it. Once complete, you can import the extension as a plugin into vCenter.
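For reference, the Layer 3 control mode mentioned above is set under the svs-domain on the VSM. This is a minimal sketch only; the domain ID is an example value, not one taken from this blueprint:

    ! VSM Layer 3 control mode: VEM-to-VSM traffic is carried over IP
    ! (via the mgmt0 interface) instead of control and packet VLANs.
    ! The domain ID below is an example and must be unique per VSM pair.
    svs-domain
      domain id 100
      svs mode L3 interface mgmt0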

Now we are ready to log back into the VSM and configure it to connect to vCenter. Once this is done, in vCenter you will see the distributed switch created under Home > Inventory > Networking. Next, we install the VEM (Virtual Ethernet Module) on the ESXi host and connect the host to the Nexus 1000v VSM. Once the VEM is installed, you can check its status and make sure it is connected to vCenter. The following screen shows the VSM connected to vCenter.
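For reference, the VSM-side configuration that establishes the vCenter connection looks roughly like the following. This is a sketch only: the vCenter IP address (10.0.0.5) and datacenter name (Ravello-DC) are placeholders, not values taken from this setup.

    ! Connect the VSM to vCenter (the extension plugin must already be registered)
    svs connection vcenter
      protocol vmware-vim
      remote ip address 10.0.0.5 port 80
      vmware dvs datacenter-name Ravello-DC
      connect

    ! Verify that the connection state shows as Connected
    show svs connections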

The following screen shows the VEM correctly installed. This step needs to be carried out on every ESXi host that requires the VEM module. Once installed, the VEM gets its configuration from the VSM.
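The commands below outline a manual VEM installation and the usual verification steps. They are a sketch only; the exact VIB filename depends on the Nexus 1000v bundle you downloaded, so the path shown is a placeholder.

    # On the ESXi host: install the VEM VIB extracted from the Nexus 1000v bundle
    esxcli software vib install -v /tmp/cross_cisco-vem-<version>.vib

    # Confirm the VEM is loaded and running on the host
    vem status -v
    vemcmd show card

    # On the VSM: the ESXi host should now appear as a module
    show module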

Now you are ready to build a topology by adding hosts to your Nexus 1000v. For example, install the VEM on the other ESXi host and add an additional VSM for high availability. With Ravello this is easy to do: simply save the ESXi host to the library and add it into the setup. Remember to change the DNS and IP settings on the new ESXi host. Once this deployment is created, you can click "Save as Blueprint" to have the entire topology, complete with VMs, configuration, networking and storage interconnects, saved into your Blueprint library, from which you can run multiple clones of this deployment with one click.

Setting up VMware NSX on public cloud

VMware NSX is a network and security virtualization platform. The entire concept of network virtualization involves decoupling the control and data planes and offering an API to configure network services from a central point. NSX abstracts the underlying physical network and introduces a software overlay model that rides on top of it. This decoupling permits complex network services to be deployed in seconds. The diagram below displays the NSX blueprint created on Ravello. Its design is based around separation into clusters for management and data-plane reasons.

The following is a summary of the prerequisites for an NSX deployment:

  • The standard vSphere client cannot be used to manage NSX; the vSphere Web Client is used instead.
  • A vCenter Server (version 5.5 or later) with at least two clusters. Multi-vCenter deployments require vCenter version 6.0.
  • NTP and DNS.
  • Deploy distributed virtual switches instead of the standard virtual switch; the VDS forms the foundation of the overlay VXLAN segments.

The following ports are required:
  • TCP ports 80 and 443 for vSphere communication and the NSX REST API.
  • TCP ports 1234, 5671 and 22 for host-to-controller-cluster communication, the RabbitMQ message bus and SSH access.

The NSX Manager and its components require a considerable amount of resources. Pre-install checks should verify the CPU, memory and disk space required for the NSX Manager, NSX Controllers and NSX Edge. The NSX deployment consists of a number of elements; the two core components are the NSX Manager and the NSX Controller. The NSX Manager is an appliance that can be downloaded in OVA format from VMware's website. The recommended approach is to deploy the NSX Manager in a separate management cluster, apart from the compute cluster; this separation allows the decoupling of the management, data and control planes. All configuration is carried out in the "Networking & Security" tab. The diagram below displays the logical switches and the transport zone they belong to. ESXi hosts that can communicate with each other are said to be in the same transport zone. Transport zones control the span of logical switches, which enables a logical switch to extend across distributed switches. Therefore, any ESXi host that is a member of that transport zone may have multiple VMs that are part of that network.
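As a quick check, transport zones and logical switches can also be listed through the NSX Manager REST API. The example below is a sketch: the manager address and credentials are placeholders, the paths are as we recall them from the NSX-v API, and they should be confirmed against the API guide for your release.

    # List transport zones (called "network scopes" in the API)
    curl -k -u admin:'<password>' https://<nsx-manager-ip>/api/2.0/vdn/scopes

    # List logical switches ("virtual wires") across all transport zones
    curl -k -u admin:'<password>' https://<nsx-manager-ip>/api/2.0/vdn/virtualwires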

The management cluster below runs the vCenter server, and the NSX Controllers are deployed in the compute clusters. Each NSX Manager should be connected to only one vCenter. The NSX Manager has a summary tab, accessible both from its own GUI and from the Web Client, and you may also SSH to its IP address. The diagram shows the version of the NSX Manager along with ARP and ping tests.

The next component is the NSX control plane, which consists of controller nodes. There should be a minimum of three controller virtual machines; three controllers are used for high availability, and all of them are active at any given time. The deployment of controllers is done via the Networking & Security | Installation and Management tab. From there, click the + symbol to add a controller.
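Once deployed, the controller cluster can be inspected from the same REST API. Again this is a sketch with placeholder credentials and manager address; verify the path against the API guide for your NSX version.

    # List the NSX controller nodes and their status
    curl -k -u admin:'<password>' https://<nsx-manager-ip>/api/2.0/vdn/controller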

The data-plane components must be installed on a per-cluster basis. This involves preparing the ESXi hosts for data-plane activity, which also enables the distributed firewall service on all hosts in the cluster. Any new hypervisor installed and added to the cluster will be provisioned automatically.
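Host preparation status can also be checked per cluster via the REST API. This is a hedged sketch: the endpoint is taken from the NSX-v network fabric API as we recall it, the credentials and manager address are placeholders, and domain-c<ID> stands for the vCenter managed object ID of your compute cluster.

    # Check the network fabric (host preparation) status for a cluster
    curl -k -u admin:'<password>' \
      "https://<nsx-manager-ip>/api/2.0/nwfabric/status?resource=domain-c<ID>"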

Just as with the Cisco Nexus 1000v, once this NSX deployment is created you can click "Save as Blueprint" to have the entire topology, complete with VMs, configuration, networking and storage interconnects, saved into your Blueprint library, from which you can run multiple clones of this deployment with one click. The current NSX blueprint is already fairly large, with multiple clusters, but it can also be easily expanded, similarly to the vSphere 6.0 and Cisco Nexus 1000v blueprint. Nodes can be added by saving the item to the library and inserting it into the blueprint. There are many features to test with this blueprint, including logical switching, firewalling, routing on Edge Services Gateways, SSL and IPsec VPN, data security and flow monitoring. With additional licences you can expand this blueprint to use third-party appliances, such as Palo Alto Networks.

The vSphere 6.0 and Cisco Nexus 1000v deployment can also be expanded to a much larger scale. Additional ESXi hosts can be added by saving the item to the library and inserting it into the blueprint. With this type of flexibility we can easily scale the blueprint and design with multiple VEMs and VSMs; a fully distributed design will have multiple VEMs installed. With additional licences you can insert other Cisco appliances that work with the Nexus 1000v, such as the VSG or the vASA, allowing you to test service chaining and other advanced security features. If you are interested in trying out this blueprint, or in creating your own VMware NSX or Cisco Nexus 1000v deployment from scratch, just open a Ravello trial account and send us a note. You will be on your way to playing with a fully functional VMware NSX or Cisco Nexus 1000v deployment within minutes, or building your very own deployment using Ravello's networking Smart Labs.
