
  • Sunday, October 11, 2015

Install and run VMware NSX 6.2 for sales demos, PoCs and training labs on AWS and Google Cloud

In this blog post, we’ll discuss the installation of NSX 6.2 for VMware vSphere on AWS or Google Cloud through the use of Ravello.

NSX allows you to virtualize your networking infrastructure, moving the logic of your routing, switching and firewalling from the hardware infrastructure into the hypervisor. Software-defined networking is an essential component of the software-defined datacenter and is arguably the most revolutionary change to datacenter networking since the introduction of VLANs.

The biggest problem with installing NSX on physical infrastructure is that it is quite resource-intensive, requires physical network components, and takes considerable time for the initial setup. By provisioning NSX on Ravello, we can install it once and redeploy it anytime, greatly reducing the time required to stand up new testing, demo or PoC environments.

To set up your vSphere lab on AWS with Ravello, create your account here.

Setup Instructions

To set up this lab, we start with the following:

  • 1 vCenter 6.0 U1 Windows server
  • 3 clusters consisting of 2 ESXi hosts each
  • 1 NFS server

In addition to this, we’ll have to deploy the NSX Manager. This can either be deployed as a nested virtual machine or directly on Ravello. In this example, we deployed the NSX Manager as a Ravello VM by extracting the OVF from the OVA file and importing it as a virtual machine.
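If you want to script that extraction step, an OVA file is simply a tar archive containing the OVF descriptor, the manifest and the VMDK disks, so Python's standard library can unpack it. The sketch below is a minimal example; the file name is a placeholder for whatever NSX Manager build you downloaded:

```python
# Minimal sketch: an OVA is a tar archive, so the OVF descriptor and the
# VMDK disks can be extracted with the standard tarfile module.
import tarfile

ova_path = "VMware-NSX-Manager-6.2.0.ova"        # placeholder file name
with tarfile.open(ova_path) as ova:
    for member in ova.getmembers():
        print("extracting", member.name)         # typically .ovf, .mf and .vmdk files
    ova.extractall(path="nsx-manager-extracted")
```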

Of the three vSphere clusters, two will be used for compute workloads and one will be used as a collapsed management and edge cluster. While this is not strictly needed, this setup allows us to test stretching NSX logical switches and distributed logical routers across layer 3 segments. For the installation of ESXi you can refer to how to set up ESXi hosts on AWS and Google Cloud with Ravello. In addition, your vSphere clusters should be configured with a distributed switch, since the standard vSwitch doesn't have the features required for NSX.

Each host in the compute cluster has the following specs:

  • 2 vCPU
  • 8 GB memory
  • 3 NICs (1 management, 1 NFS, 1 VTEP, each on a separate dvSwitch)
  • 1 20 GB disk for the OS installation

The hosts in the management cluster have the following specs:

  • 4 vCPU
  • 20 GB memory
  • 4 NICs (1 management, 1 NFS, 1 VTEP, 1 transit, each on a separate dvSwitch)
  • 1 20 GB disk for the OS installation

The management cluster is sized larger because it will host the NSX Controllers, Edge Services Gateways and management virtual machines.

After publishing our labs and installing the base vSphere setup (or provisioning virtual machines from blueprints; I have blueprints for a preinstalled ESXi and vCenter, which saves quite some time), we can get started on the configuration of NSX.

The installation of the NSX Manager itself is quite simple. After deploying the virtual appliance, it will not be reachable through the web interface yet, because no IP address has been set. To resolve this, log in to the console with the username admin and the password default. After logging in, run the command enable, which will ask for the enable password (also default), and then run setup. This sets the initial configuration, allowing you to access the system through the web interface.

After configuring the manager, open a web browser and connect to https://ip-of-your-manager. After logging in, you should see the initial configuration screen:

Start off with “Manage Appliance Settings” and confirm that all settings are correct. Of special importance is the NTP server, which is critical to the functioning of NSX and should be the same on vCenter, ESXi and the NSX Manager.

After configuring the appliance, we can start with the vCenter registration. Either open “Manage vCenter Registration” from the main screen, or go to the configuration page under Components -> NSX Manager Service. Start with the lookup service, which should point to your vCenter server. If you are running vCenter 6 or higher, use port 443; otherwise use 7444. For the credentials, use an administrator account on your vCenter server.

In the vCenter server configuration, point it to the same vCenter as used for the inventory service.
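The registration can also be driven through the NSX Manager REST API, which is handy if you rebuild the lab from a blueprint often. The sketch below uses Python's requests library against the /api/2.0/services/vcconfig endpoint as described in the NSX-v 6.2 API guide; verify the exact endpoint and XML element names against the guide for your build. The host names and credentials are placeholders for this lab:

```python
# Hedged sketch: registering vCenter with NSX Manager over the REST API.
# Endpoint and XML body are based on the NSX-v 6.2 API guide; verify them
# for your build. Host names and credentials below are lab placeholders.
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"        # placeholder
NSX_AUTH = ("admin", "default")                      # NSX Manager credentials

vc_config = """<vcInfo>
  <ipAddress>vcenter.lab.local</ipAddress>
  <userName>administrator@vsphere.local</userName>
  <password>VMware1!</password>
  <assignRoleToUser>true</assignRoleToUser>
</vcInfo>"""

resp = requests.put(
    NSX_MANAGER + "/api/2.0/services/vcconfig",
    data=vc_config,
    headers={"Content-Type": "application/xml"},
    auth=NSX_AUTH,
    verify=False,   # lab appliance uses a self-signed certificate
)
print(resp.status_code)

# Check the connection status afterwards (should eventually report CONNECTED).
status = requests.get(
    NSX_MANAGER + "/api/2.0/services/vcconfig/status",
    auth=NSX_AUTH, verify=False,
)
print(status.text)
```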

In case the registration doesn’t work, wait a few minutes. The initial boot of the services can take up to 10 minutes, so the services might not have started yet. You can check this by opening “view summary” on the main page.

If the status doesn’t say connected after registration, click the circular icon to the right of the status. Synchronization happens automatically, but forcing it manually speeds up the initial sync.

After the initial setup, log out of the vSphere Web Client and log in again. You should see a new icon called “Networking & Security”.

This gives you an environment preconfigured for NSX, but without the controllers or the NSX drivers actually installed in the hypervisors. That is enough to quickly provision a study or lab environment where people configure NSX themselves, without having to spend time deploying appliances or recreating ESXi hosts and vCenter servers. We’ll handle the preparation of the clusters in the next section, so if you want to create a fully functional NSX environment and blueprint it, read on.

Cluster Preparation

First, we’ll deploy a controller. Go to “Networking and security”, open “Installation” and select the “Management” tab. At the bottom, you should see a plus icon which will deploy a controller.

Select the datacenter to deploy into, select your cluster and datastore, and optionally a specific host and folder. Connect your controller to the same network as your vCenter server and NSX Manager and select an IP pool. Since we haven’t created an IP pool yet, we can do that now: click the green plus icon above the IP pool list and enter your network configuration. This IP pool will automatically provision static IP addresses to your controllers.

In a production environment, you should run a minimum of 3 controllers (and always an odd number), but since this is a lab environment, 1 controller will suffice. If you would like, you can deploy 3 controllers by repeating these steps and reusing the IP pool created earlier.
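For repeatable labs, the controller deployment can also be driven through the NSX REST API. The sketch below is a hedged example based on the NSX-v 6.2 API guide; verify the endpoint and XML element names for your build, and note that all the object IDs (IP pool, resource pool, datastore, port group) are placeholders you would look up in your own vCenter inventory:

```python
# Hedged sketch: deploying an NSX controller through the REST API instead of
# the web client. Endpoint and XML element names follow the NSX-v 6.2 API
# guide and should be verified for your build; all IDs below are placeholders.
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"   # placeholder
NSX_AUTH = ("admin", "default")

controller_spec = """<controllerSpec>
  <name>controller-1</name>
  <ipPoolId>ipaddresspool-1</ipPoolId>
  <resourcePoolId>domain-c7</resourcePoolId>
  <datastoreId>datastore-10</datastoreId>
  <networkId>dvportgroup-20</networkId>
  <password>VMware1!VMware1!</password>
</controllerSpec>"""

resp = requests.post(
    NSX_MANAGER + "/api/2.0/vdn/controller",
    data=controller_spec,
    headers={"Content-Type": "application/xml"},
    auth=NSX_AUTH,
    verify=False,   # self-signed lab certificate
)
# The API returns a job ID that can be used to track the deployment.
print(resp.status_code, resp.text)
```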

After deploying a controller, move to the “Host Preparation” tab. Click the “Install” link next to your cluster; after a few minutes the status should show “Installed”. Repeat this step for every cluster you want to configure. After the NSX drivers have been installed on your cluster hosts, click the “Configure” link in the VXLAN column for each cluster. Select the distributed vSwitch you’ve provisioned for your VTEP network and an IP pool. Since we haven’t created an IP pool for the VTEPs yet, we’ll create one by selecting “New IP Pool”. Create this IP pool in the same way as we previously did for the controller network, and leave the rest of the settings at their defaults.
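Host preparation can likewise be triggered via the API, which is useful if you tear down and rebuild clusters regularly. This is a hedged sketch assuming the /api/2.0/nwfabric/configure endpoint from the NSX-v 6.2 API guide, with a placeholder cluster ID:

```python
# Hedged sketch: triggering host preparation (NSX VIB installation) on a
# cluster via the REST API. Endpoint and XML body follow the NSX-v 6.2 API
# guide as I understand it; verify for your build. "domain-c7" is a
# placeholder for the cluster's vCenter managed object ID.
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"   # placeholder
NSX_AUTH = ("admin", "default")

prep_body = """<nwFabricFeatureConfig>
  <resourceConfig>
    <resourceId>domain-c7</resourceId>
  </resourceConfig>
</nwFabricFeatureConfig>"""

resp = requests.post(
    NSX_MANAGER + "/api/2.0/nwfabric/configure",
    data=prep_body,
    headers={"Content-Type": "application/xml"},
    auth=NSX_AUTH,
    verify=False,
)
print(resp.status_code, resp.text)   # returns a job ID to track progress
```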

After a few minutes, your VTEP interfaces should have been created, which you can also see in the networking configuration of the ESXi host: a new vmkernel port has been created with an IP address from the IP pool, and its TCP/IP stack is set to “vxlan” instead of the default.
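If you'd rather verify this programmatically than click through each host, a small pyVmomi script can list the vmkernel interfaces and the TCP/IP stack they use. The vCenter host name and credentials below are placeholders for this lab:

```python
# Hedged sketch: listing vmkernel ports and their TCP/IP stacks with pyVmomi
# ("pip install pyvmomi"). VTEP vmkernel ports should show the "vxlan" stack.
# Host name and credentials are lab placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab: self-signed certificates
si = SmartConnect(host="vcenter.lab.local",     # placeholder
                  user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)

for host in view.view:
    for vnic in host.config.network.vnic:
        # VTEP vmkernel ports use the dedicated "vxlan" TCP/IP stack
        print(host.name, vnic.device,
              vnic.spec.ip.ipAddress, vnic.spec.netStackInstanceKey)

view.Destroy()
Disconnect(si)
```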

After configuring VXLAN on each cluster, we can move on to the logical network configuration. Open the “Logical Network Preparation” tab and edit the segment ID & multicast address allocation. The segment ID range defines the VXLAN network IDs (also known as VNIs) that NSX is allowed to use. This mainly matters if you run multiple VXLAN implementations on the same physical underlay. While this is unlikely in a Ravello lab environment, we’re still required to configure it.

The multicast addresses are mainly used when NSX is set to multicast or hybrid replication mode, so configuring them is not required here.

The last step required is to configure at least one transport zone. Open the “Transport Zones” tab and click the plus icon to create a new one. Enter a name, select “Unicast” as the replication mode and select the clusters that will be part of the transport zone. If you wish to stretch logical networks or distributed logical routers across clusters, select all clusters in your datacenter for this transport zone. If you wish to restrict logical networks or distributed logical routers to specific clusters (for example, your edge network), select only the clusters that should have access to these networks.
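Both logical network preparation steps (the segment ID range and the transport zone) can also be scripted against the NSX REST API. The sketch below is hedged: the endpoints and XML bodies follow the NSX-v 6.2 API guide as far as I recall them, so verify them for your build, and the VNI range and cluster IDs are lab placeholders:

```python
# Hedged sketch: segment ID range and unicast transport zone via the REST
# API. Endpoints and XML bodies are based on the NSX-v 6.2 API guide and
# should be verified for your build; all values are lab examples.
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"   # placeholder
NSX_AUTH = ("admin", "default")
HEADERS = {"Content-Type": "application/xml"}

# 1. Define the VNI (segment ID) range NSX may use, e.g. 5000-5999.
segment_range = """<segmentRange>
  <name>lab-segment-range</name>
  <begin>5000</begin>
  <end>5999</end>
</segmentRange>"""
requests.post(NSX_MANAGER + "/api/2.0/vdn/config/segments",
              data=segment_range, headers=HEADERS, auth=NSX_AUTH, verify=False)

# 2. Create a unicast transport zone spanning the prepared clusters.
#    "domain-c7" and "domain-c9" are placeholder cluster IDs from vCenter.
transport_zone = """<vdnScope>
  <name>tz-lab</name>
  <clusters>
    <cluster><cluster><objectId>domain-c7</objectId></cluster></cluster>
    <cluster><cluster><objectId>domain-c9</objectId></cluster></cluster>
  </clusters>
  <controlPlaneMode>UNICAST_MODE</controlPlaneMode>
</vdnScope>"""
resp = requests.post(NSX_MANAGER + "/api/2.0/vdn/scopes",
                     data=transport_zone, headers=HEADERS,
                     auth=NSX_AUTH, verify=False)
print(resp.status_code, resp.text)   # returns the new scope ID, e.g. vdnscope-1
```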

After creating a transport zone, you should have a fully functional NSX environment, and you can start creating logical switches, distributed logical routers, edges and distributed firewall rules, and use any other feature available in NSX.
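As a quick functional test of the finished environment, you can create a first logical switch through the API as well. Again a hedged sketch based on the NSX-v 6.2 API guide, where "vdnscope-1" stands in for whatever ID your transport zone was given:

```python
# Hedged sketch: creating a test logical switch on the new transport zone.
# Endpoint and XML body follow the NSX-v 6.2 API guide as I recall them;
# "vdnscope-1" is a placeholder for your transport zone ID.
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"   # placeholder
NSX_AUTH = ("admin", "default")

ls_spec = """<virtualWireCreateSpec>
  <name>ls-web-01</name>
  <description>First test logical switch</description>
  <tenantId>lab</tenantId>
</virtualWireCreateSpec>"""

resp = requests.post(
    NSX_MANAGER + "/api/2.0/vdn/scopes/vdnscope-1/virtualwires",
    data=ls_spec,
    headers={"Content-Type": "application/xml"},
    auth=NSX_AUTH,
    verify=False,
)
print(resp.status_code, resp.text)   # returns the new virtual wire ID
```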

Saving your environment as a Blueprint

Once you have installed your NSX environment, save it as a Blueprint. You can then share it with team members in your sales engineering organization, your training group, and your customers, prospects and partners. With a few clicks, they can provision a fully functional instance of this environment on AWS or Google Cloud for their own use. You don’t need to schedule time on your sales demo infrastructure in advance: you can customize your demo scenario from a base blueprint, provision as many student training labs as needed on demand, and pay per use.
