VMware NSX is a network and security virtualization solution that allows you to build overlay networks. The decoupling of networking services from physical assets is where the real advantage of NSX lies. Network virtualization with NSX offers the same API-driven, automated and flexible approach that server virtualization brought to compute. It lets you change hardware without having to worry about your workload networking, which is preserved because it is decoupled from the hardware. Decoupling security policy from physical assets brings similar benefits, as policy becomes an abstraction rather than something tied to a device. All of these abstractions are possible because we sit on the hypervisor and can see into the VM.

NSX provides the overlay, not the underlay. The physical underlay should be a leaf-and-spine design, with one or two ToR switches per rack. Many deployments use two ToR switches; depending on port density, one may be enough. Each ToR has one or two connections to every spine, giving a highly available design. Layer 2 domains should be limited as much as possible to keep broadcast domains small. Keeping broadcast domains to small, isolated islands minimises the blast radius should a fault occur. As a general design rule, Layer 2 should be used for what it was designed for: connecting two hosts. Layer 3 routing protocols should be used on the underlay as much as possible. Layer 3 packets carry a TTL field, which Layer 2 frames lack, and it is the TTL that prevents loops from circulating indefinitely.
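To make the routed underlay idea concrete, here is a minimal, NX-OS-style sketch of a leaf (ToR) switch peering with two spines; eBGP is only one common choice of routing protocol, and the AS numbers, addresses and advertised prefix are purely illustrative.

feature bgp
!
router bgp 65011
  router-id 192.168.255.11
  address-family ipv4 unicast
    ! advertise the locally attached rack subnet into the underlay
    network 10.11.0.0/24
  ! eBGP peering to spine 1
  neighbor 172.16.1.1
    remote-as 65000
    address-family ipv4 unicast
  ! eBGP peering to spine 2
  neighbor 172.16.2.1
    remote-as 65000
    address-family ipv4 unicast

Because every ToR-to-spine link is a routed point-to-point link, a link or spine failure is handled by the routing protocol rather than by spanning tree.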
The hypervisor, also referred to as the virtual machine monitor, is a device or program that enables multiple operating systems to share a single host. Hypervisors are a leap forward in fully utilising server hardware, as a single operating system per host would never fully utilise all of the physical resources. Soft switches run in the hypervisor hosts and implement Layer 2 networking over Layer 3, using the IP transport in the middle to exchange data. VMware NSX allows you to implement virtual segments in these soft switches and, as discussed, carries MAC over IP. To support remote Layer 2 islands there is no need to stretch VLANs, which would join broadcast and failure domains together. VMware NSX supports complicated application stacks in cloud environments. It has many features, including Layer 2 and Layer 3 segments, distributed VM NIC firewalls, distributed routing, load balancing, NAT, and Layer 2 and Layer 3 gateways to connect to the physical world.

NSX uses a proper control plane to distribute forwarding information to the soft switches. The NSX controller cluster configures the soft switches located in the hypervisor hosts; the cluster has a minimum of three nodes and a maximum of five for redundancy. To form the overlay (on top of the underlay) between tunnel endpoints, NSX uses VXLAN, which has become the de facto standard for overlay creation. Three replication modes are available: multicast, the newer unicast mode, and hybrid mode. Hybrid mode uses multicast locally but does not rely on the transport network core for multicast support. This is a huge benefit, as many operational teams would rather not implement multicast on core nodes; multicast is complex, and the core should be as simple as possible, concerned only with forwarding packets from A to B. MPLS networks operate this way, and they scale to support millions of routes.

VMware NSX operates with distributed routers. All the soft switches appear to be part of the same router: every switch hosts the same gateway IP and answers to the MAC address associated with that IP. The distributed approach creates one large logical device, so whichever switch receives a packet sent to the gateway performs the Layer 3 forwarding locally. One of the most powerful features of NSX is the VM NIC firewall. It is an in-kernel firewall, so no traffic is punted to userworld. In the physical world, firewalls are network choke points and cannot be moved easily. Networks today need to be agile and flexible, and distributed firewalls fit that requirement. They are fully stateful and support IPv4 and IPv6.
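To illustrate the MAC-over-IP encapsulation described above, the following Python sketch uses the Scapy library to build a VXLAN-encapsulated frame: an inner Ethernet frame between two VMs is wrapped in a VXLAN header and carried between tunnel endpoints inside an outer IP/UDP packet. The addresses and the VNI value are illustrative only.

# Sketch of VXLAN (MAC over IP) encapsulation using Scapy.
# All MAC/IP addresses and the VNI are illustrative values.
from scapy.all import Ether, IP, UDP
from scapy.layers.vxlan import VXLAN

# Inner frame: VM-to-VM traffic on the logical Layer 2 segment.
inner = Ether(src="00:50:56:aa:aa:01", dst="00:50:56:aa:aa:02") / \
        IP(src="10.1.1.10", dst="10.1.1.20")

# Outer headers: tunnel endpoint to tunnel endpoint across the routed
# Layer 3 underlay. VXLAN uses UDP destination port 4789.
encapsulated = Ether() / \
               IP(src="192.168.10.11", dst="192.168.20.12") / \
               UDP(sport=49152, dport=4789) / \
               VXLAN(vni=5001) / \
               inner

encapsulated.show()  # print the packet layer by layer

The inner frame is carried untouched; the soft switches only need IP reachability between tunnel endpoints, which is exactly why the underlay can remain a simple routed network.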
The Nexus 1000v Series is a software-based NX-OS switch that adds capabilities to vSphere 6 (and earlier) environments. The Nexus 1000v may be combined with other Cisco products, such as the VSG and vASA, to offer a complete network and security solution. As organisations move to the cloud they need intelligent, advanced network functions with a CLI they already know. The Nexus 1000v architecture is divided into two main components: the Virtual Ethernet Module (VEM) and the Virtual Supervisor Module (VSM). These components sit in logically different positions in the network. The VEM lives inside the hypervisor and executes as part of the ESXi kernel. Each VEM learns individually and builds and maintains its own MAC address table. The VSM is used to manage the VEMs. The VSM can be deployed as a highly available pair (two for redundancy), and control communication between the VEM and the VSM can now run over Layer 3; when that communication was Layer 2, it required packet and control VLAN configuration. The Nexus 1000v can be viewed as a distributed device: the VSM controls multiple VEMs as one logical device, and the VEMs do not need to be configured independently. All configuration is performed on the VSM and automatically pushed down to the VEMs sitting in the ESXi kernel. The entire solution is integrated into VMware vCenter, which offers a single point of configuration for the Nexus switches and all the VMware elements. The entire virtualization configuration is performed with the vSphere client software, including the network configuration of the Nexus 1000v switches.
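As a rough sketch of the Layer 3 control mode and vCenter integration described above, a VSM configuration along these lines defines the domain, selects a Layer 3 interface for VSM-to-VEM control traffic, and registers the switch with vCenter; the domain ID, IP address and datacenter name are illustrative.

svs-domain
  domain id 100
  svs mode L3 interface mgmt0
!
svs connection vcenter
  protocol vmware-vim
  remote ip address 192.168.1.50
  vmware dvs datacenter-name DC1
  connect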
One major configuration feature of the Nexus 1000v is the use of port profiles. Port profiles are configured from the VSM and define the network policies for the VMs; they are used to configure interface settings on the VEMs. When a port profile setting changes, the change is automatically propagated to every interface that belongs to that port profile. Those interfaces may be connected to a number of VEMs dispersed around the network, so there is no need to configure anything on an individual NIC basis. In vCenter a port profile is represented as a port group, which is then applied to individual VM NICs through the vCenter GUI. Port profiles are dynamic in nature and move when the VM is moved: all policies defined in a port profile follow the VM throughout the network, and in addition to the policy moving, the VM also retains its network state.
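A representative port profile on the VSM might look something like the snippet below (the profile name and VLAN ID are illustrative). Once state enabled is issued, the profile shows up in vCenter as a port group that can be assigned to VM NICs.

port-profile type vethernet Web-Servers
  vmware port-group
  switchport mode access
  switchport access vlan 100
  no shutdown
  state enabled

Changing, say, the access VLAN in this one profile would propagate to every vNIC attached to the Web-Servers port group, regardless of which VEM hosts it.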