Tuesday May 26, 2009

Crossbow for Cloud Computing Architectures

The first phase of Crossbow was integrated in OpenSolaris last December. I recently posted a paper which shows how Crossbow technology such as virtual NICs (VNICs), virtual switches, Virtual Wires (vWires), and Virtual Network Machines (VNMs) can be used as a foundation to build isolated virtual networks for cloud computing architectures. The document is available on opensolaris.org. Please share your comments on the Crossbow mailing list at crossbow-discuss@opensolaris.org.

Thursday Feb 14, 2008

Private virtual networks for Solaris xVM and Zones with Crossbow

Virtualization is great: save money, save lab space, and save the planet. So far so good! But how do you connect these virtual machines, allocate them their share of the bandwidth, and how do they talk to the rest of the physical world? This is where the OpenSolaris Project Crossbow comes in. Today we are releasing a new pre-release snapshot of Crossbow, an exciting OpenSolaris project which enables network virtualization in Solaris, network bandwidth partitioning, and improved scalability of network traffic processing.

This new release of the project includes new features that allow you to build complete virtual networks that are isolated from the physical network. Virtual machines and Zones can be connected to these virtual networks, and isolated from the rest of the physical network through a firewall, NAT, and so on. This is useful when you want to prototype a distributed application before deploying it on a physical network, or when you want to isolate and hide your virtual network.

This article shows how Crossbow can be used together with NAT to build a complete virtual network connecting multiple Zones within a Solaris host. The same technique applies to xVM Server x64 as well, since xVM uses Crossbow for its network virtualization needs. A detailed description of the Crossbow virtualization architecture can be found in my design document.

In this example, we will build the following network:

First we need to build our virtual network. With Crossbow this can be done very simply using etherstubs. An etherstub is a pseudo Ethernet NIC which can be created with dladm(1M). VNICs can then be created on top of that etherstub. The Crossbow MAC layer of the stack implicitly creates a virtual switch between all the VNICs sharing the same etherstub. In the following example we create an etherstub and three VNICs for our virtual network.


# dladm create-etherstub etherstub0
# dladm create-vnic -d etherstub0 vnic0
# dladm create-vnic -d etherstub0 vnic1
# dladm create-vnic -d etherstub0 vnic2

By default Crossbow will assign a random MAC address to the VNICs, as we can see from the following command:


# dladm show-vnic
LINK         OVER         SPEED     MACADDRESS        MACADDRTYPE
vnic0        etherstub0   0 Mbps    2:8:20:e7:1:6f    random
vnic1        etherstub0   0 Mbps    2:8:20:53:b4:9    random
vnic2        etherstub0   0 Mbps    2:8:20:47:b:9c    random

You could also assign a bandwidth limit to each VNIC by setting the maxbw property during VNIC creation. At this point we are done creating our virtual network. In the case of xVM, you would specify "etherstub0" instead of a physical NIC to connect the xVM domain to the virtual network. This would cause xVM to automatically create a VNIC on top of etherstub0 when booting the virtual machine. xVM configuration is described in the xVM configuration guide.
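As a sketch of the bandwidth limit mentioned above, a VNIC could be created with a maxbw cap and the setting verified afterwards. The exact property syntax may differ between Crossbow snapshots, so treat this as illustrative rather than definitive:

```shell
# Create a VNIC on the etherstub with a 100 Mbps bandwidth cap
# (property syntax may vary between Crossbow pre-release snapshots)
dladm create-vnic -d etherstub0 -p maxbw=100 vnic3

# Verify the bandwidth limit on the new VNIC
dladm show-linkprop -p maxbw vnic3
```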

Now that we have our VNICs we can create our Zones. Zone test1 can be created as follows:


# zonecfg -z test1
test1: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:test1> create
zonecfg:test1> set zonepath=/export/test1
zonecfg:test1> set ip-type=exclusive
zonecfg:test1> add inherit-pkg-dir
zonecfg:test1:inherit-pkg-dir> set dir=/opt
zonecfg:test1:inherit-pkg-dir> end
zonecfg:test1> add net
zonecfg:test1:net> set physical=vnic1
zonecfg:test1:net> end
zonecfg:test1> exit

Note that in this case the zone is assigned its own IP instance ("set ip-type=exclusive"). This allows the zone to configure its own VNIC, which is connected to our virtual network. Now it's time to set up NAT between our external network and our internal virtual network. We'll be setting up NAT with IP Filter, which is part of OpenSolaris, based on the excellent NAT write-up by Rich Teer.

In our example the global zone will be used to interface our private virtual network with the physical network. The global zone connects to the physical network via eri0, and to the virtual private network via vnic0, as shown by the figure above. The eri0 interface is configured the usual way, and in our case its address is assigned using DHCP:


# ifconfig eri0
eri0: flags=201000843 mtu 1500 index 2
inet 192.168.84.24 netmask ffffff00 broadcast 192.168.84.255
ether 0:3:ba:94:65:f8

We will assign a static IP address to vnic0 in the global zone:


# ifconfig vnic0 plumb
# ifconfig vnic0 inet 192.168.0.1 up
# ifconfig vnic0
vnic0: flags=201100843 mtu 9000 index 6
inet 192.168.0.1 netmask ffffff00 broadcast 192.168.0.255
ether 2:8:20:e7:1:6f

Note that the usual configuration variables (e.g. /etc/hostname.&lt;interface&gt;) must be populated for the configuration to persist across reboots. We must also enable IPv4 forwarding on the global zone. Run routeadm(1M) to display the current configuration, and if "IPv4 forwarding" is disabled, enable it with the following command:


# routeadm -u -e ipv4-forwarding
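To make the static address of vnic0 persistent, the global zone's interface configuration file can be populated. A minimal sketch, assuming the traditional /etc/hostname.* file format with an explicit netmask:

```shell
# Persist the static vnic0 address across reboots
# (file format assumes the classic Solaris hostname.<interface> convention)
echo "192.168.0.1 netmask 255.255.255.0 up" > /etc/hostname.vnic0
```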

Then we can enable NAT on the eri0 interface. We're using a simple NAT configuration in /etc/ipf/ipnat.conf:


# cat /etc/ipf/ipnat.conf
map eri0 192.168.0.0/24 -> 0/32 portmap tcp/udp auto
map eri0 192.168.0.0/24 -> 0/32

We also need to enable IP filtering on our physical network-facing NIC eri0. We run "ipnat -l" to verify that our NAT rules have been enabled.


# svcadm enable network/ipfilter
# ipnat -l
List of active MAP/Redirect filters:
map eri0 192.168.0.0/24 -> 0.0.0.0/32 portmap tcp/udp auto
map eri0 192.168.0.0/24 -> 0.0.0.0/32

Now we can boot our zones (zone test2 is configured the same way as test1, using vnic2):


# zoneadm -z test1 boot
# zoneadm -z test2 boot

Here I assigned the address 192.168.0.100 to vnic1 within zone test1:


# zlogin test1
[Connected to zone 'test1' pts/2]
...
# ifconfig vnic1
vnic1: flags=201000863 mtu 9000 index 2
inet 192.168.0.100 netmask ffffff00 broadcast 192.168.0.255
ether 2:8:20:53:b4:9
# netstat -nr

Routing Table: IPv4
  Destination           Gateway           Flags  Ref     Use     Interface
-------------------- -------------------- ----- ----- ---------- ---------
default              192.168.0.1          UG        1          0 vnic1
192.168.0.0          192.168.0.100        U         1          0 vnic1
127.0.0.1            127.0.0.1            UH        1          2 lo0

Routing Table: IPv6
  Destination/Mask            Gateway                   Flags Ref   Use   If
--------------------------- --------------------------- ----- --- ------- -----
::1                         ::1                         UH      1       0 lo0

Note that the zone appears to be on a network and has what looks like a regular NIC with a regular MAC address. In reality, this zone is connected to a virtual network isolated from the physical network. From that non-global zone, we can now reach out to the physical network via NAT running in the global zone:


# ssh someuser@129.146.17.55
Password:
Last login: Tue Feb 12 13:35:03 2008 from somehost
...

From the global zone, we can query NAT to see the translations taking place:


# ipnat -l
List of active MAP/Redirect filters:
map eri0 192.168.0.0/24 -> 0.0.0.0/32 portmap tcp/udp auto
map eri0 192.168.0.0/24 -> 0.0.0.0/32

List of active sessions:
MAP 192.168.0.100 37153 <- -> 192.168.84.24 26333 [129.146.17.55 22]

Of course this is only the tip of the iceberg. You could deploy NAT from a non-global zone itself, deploy a virtual router on your virtual network, enable additional filtering rules, and so on. You are also not limited to a single virtual network: you can create multiple virtual networks within a host, route between them, and more. We are exploring some of these possibilities as part of the Crossbow and Virtual Network Machines projects.
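For instance, a second isolated network could be built on another etherstub, with the global zone holding an address on each network and routing between them. A sketch, where the link names and addresses are arbitrary:

```shell
# A second virtual network on its own etherstub
dladm create-etherstub etherstub1
dladm create-vnic -d etherstub1 vnic3

# Give the global zone a foothold on the new network
ifconfig vnic3 plumb
ifconfig vnic3 inet 192.168.1.1 netmask 255.255.255.0 up

# With ipv4-forwarding already enabled, the global zone now routes
# between 192.168.0.0/24 (etherstub0) and 192.168.1.0/24 (etherstub1)
```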

Monday Apr 02, 2007

Virtual Switching in Solaris with Crossbow VNICs

Virtual NICs, also known as VNICs, are a core component of project Crossbow. They allow physical NICs to be shared by multiple Zones or virtual machines such as Xen domains. VNICs appear to the rest of the system as regular NICs. VNICs can be assigned a subset of the hardware resources (interrupts, rings, etc.) made available by the underlying hardware.

In order to provide connectivity between the multiple Zones or virtual machines sharing a single physical NIC, the VNIC layer also provides a data path between the VNICs defined on top of the same underlying NIC. The VNICs sharing the same underlying NIC appear to be on the same segment, i.e. connected to the same virtual switch. The virtual switch concept also allows fully virtual networks to be built within a machine.
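The implicit virtual switch described above can be illustrated by creating two VNICs over the same physical NIC; the NIC name bge0 below is an assumption for the example:

```shell
# Two VNICs over the same physical NIC (assumed here to be bge0)
dladm create-vnic -d bge0 vnic0
dladm create-vnic -d bge0 vnic1

# Traffic between vnic0 and vnic1 is switched inside the MAC layer's
# implicit virtual switch and never has to leave the machine
```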

A couple of days ago I posted a first draft design document describing the concept of virtual switches, how they are implemented by VNICs in Solaris, and how they can be used in practice.

About

ndroux
