Monday Nov 15, 2010

Solaris 11 Express 2010.11 is available!!

Congratulations to everyone for getting Solaris 11 Express out! With this release come a lot of networking improvements and features, including the following:
  • Network Virtualization and Resource Management (Crossbow) with VNICs and flows
  • New IP administrative interface (ipadm)
  • IPMP rearchitecture
  • IP observability and data link statistics (dlstat)
  • Link protection when a data link is given to a guest VM
  • Layer 2 bridging
  • More device types supported by dladm, including WiFi
  • Network Automagic for automatic network selection on desktops
  • Improved socket interface for better performance
Get more information here.
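
To give a flavor of the new administrative interfaces, here is a minimal sketch, assuming a data link named net0 (link names and the address are placeholders; adjust for your hardware):

global# dladm create-vnic -l net0 vnic0
global# ipadm create-ip vnic0
global# ipadm create-addr -T static -a 192.168.1.10/24 vnic0/v4
global# dlstat show-link vnic0

The first command creates a Crossbow VNIC over net0, the next two plumb it and assign a static address using the new ipadm interface, and dlstat reports per-link traffic statistics.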

More to follow in the future!

Steffen

Monday Jun 01, 2009

OpenSolaris 2009.06 Delivers Crossbow (Network Virtualization and Resource Control)

Today OpenSolaris 2009.06, the third release of OpenSolaris, is announced and available for download. Among the many features in this version is the delivery of Project Crossbow, in a fully supported distribution. This brings network virtualization, including Virtual NICs (VNICs), bandwidth control and management, flow (QoS) creation and management, virtual switches, and other features to OpenSolaris.

Network virtualization joins a number of other features already in OpenSolaris, such as vanity naming (allowing custom names for data links), snooping on loopback for better observability, a re-architected IPMP with an administrative interface, and Network Automagic (NWAM--automatic configuration of desktop networking based on available wired and wireless network services).

Congratulations to everyone who made all this possible!

Steffen

PS: Regarding "fully supported", please note the new support prices and durations!

Monday Jan 19, 2009

IPMP Re-architecture is delivered

In the process of working on some zones and IPMP testing, I ran into a little difficulty. After probing for some insight, I was reminded by Peter Memishian that the IPMP Re-Architecture (part of Project Clearview) bits were going to be in Nevada/SXCE build 107, and that I could BFU the latest bits onto an existing Nevada install. Well!!! [For Peter's own perspective of this, see his recent blog.]

Since I was already playing with build 105 because the Crossbow features are now integrated, I decided to apply the IPMP bits to a 105 installation. [Note: The IPMP Re-architecture is expected to be in Solaris Express Community Edition (SX-CE) build 107 or so (due to be out early Feb 2009), and thus in OpenSolaris 2009.spring (I don't know what its final name will be). Early access to IPS packages for OpenSolaris 2008.11 should appear in the bi-weekly developer repository shortly after SX-CE has the feature included. There is no intention to back port the re-architecture to Solaris 10.]

I am impressed! The bits worked right away, and once I got used to the slightly different way of monitoring IPMP, I really liked what I saw.

Being accustomed to IPMP on Solaris 10, and having beta tested Crossbow on previous Nevada bits, I used the long-standing (Solaris 10 and prior) IPMP configuration style. For my testing I am using link failure detection only, so no probe addresses are configured. [For examples of the new configuration format, see the section Using the New IPMP Configuration Style below. (15 Feb 2009)]

global# cat /etc/hostname.bge1
group shared

global# cat /etc/hostname.bge2
group shared

global# cat /etc/hostname.bge3
group shared standby
In my test case bge1 and bge2 are active interfaces, and bge3 is a standby interface.
global# ifconfig -a4
bge0: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 2
        inet 139.164.63.125 netmask ffffff00 broadcast 139.164.63.255
        ether 0:3:ba:e3:42:8b
bge1: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 3
        inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
        groupname shared
        ether 0:3:ba:e3:42:8c
bge2: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 4
        inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
        groupname shared
        ether 0:3:ba:e3:42:8d
bge3: flags=261000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,STANDBY,INACTIVE,CoS> mtu 1500 index 5
        inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
        groupname shared
        ether 0:3:ba:e3:42:8e
ipmp0: flags=8201000842<BROADCAST,RUNNING,MULTICAST,IPv4,CoS,IPMP> mtu 1500 index 6
        inet 0.0.0.0 netmask 0
        groupname shared
You will notice that all three interfaces are up and part of group shared. What is different from the old IPMP is that another interface, ipmp0, was automatically created, carrying the IPMP flag. This is the interface that will be used for all the data IP addresses.

Because I used the old format for the /etc/hostname.* files, the backward compatibility of the new IPMP automatically created the ipmp0 interface and assigned it a name. If I wish to have control over that name, I must configure IPMP slightly differently. More on that later.

A new command, ipmpstat(1M), is also introduced to provide enhanced information about the IPMP configuration.
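
The output modes used in the rest of this entry are:

global# ipmpstat -a    # data addresses and the interfaces carrying them
global# ipmpstat -g    # group state and failure detection time (FDT)
global# ipmpstat -i    # per-interface flags, link, probe, and state
global# ipmpstat -p    # live probe results (probe-based detection only)
global# ipmpstat -t    # probe targets and test addresses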

My test is really about using zones and IPMP, so here is what things look like when I bring up three zones that are also configured the traditional way, with network definitions using the bge interfaces. [Using the new format, I would replace bge with either ipmp0 (keep in mind that 0 (zero) is set dynamically) or shared. For more details on the new format, go to Using the New IPMP Configuration Style below. (15 Feb 2009)]

global# for i in 1 2 3; do zonecfg -z shared${i} info net; done
net:
        address: 10.1.14.141/26
        physical: bge1
        defrouter: 10.1.14.129
net:
        address: 10.1.14.142/26
        physical: bge1
        defrouter: 10.1.14.129
net:
        address: 10.1.14.143/26
        physical: bge2
        defrouter: 10.1.14.129
After booting the zones, note that the zones' IP addresses appear as logical interfaces on ipmp0, rather than as logical interfaces on the bge interfaces as in the old IPMP.
global# ifconfig -a4
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone shared1
        inet 127.0.0.1 netmask ff000000
lo0:2: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone shared2
        inet 127.0.0.1 netmask ff000000
lo0:3: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone shared3
        inet 127.0.0.1 netmask ff000000
bge0: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 2
        inet 139.164.63.125 netmask ffffff00 broadcast 139.164.63.255
        ether 0:3:ba:e3:42:8b
bge1: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 3
        inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
        groupname shared
        ether 0:3:ba:e3:42:8c
bge2: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 4
        inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
        groupname shared
        ether 0:3:ba:e3:42:8d
bge3: flags=261000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,STANDBY,INACTIVE,CoS> mtu 1500 index 5
        inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
        groupname shared
        ether 0:3:ba:e3:42:8e
ipmp0: flags=8201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS,IPMP> mtu 1500 index 6
        zone shared1
        inet 10.1.14.141 netmask ffffffc0 broadcast 10.1.14.191
        groupname shared
ipmp0:1: flags=8201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS,IPMP> mtu 1500 index 6
        zone shared2
        inet 10.1.14.142 netmask ffffffc0 broadcast 10.1.14.191
ipmp0:2: flags=8201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS,IPMP> mtu 1500 index 6
        zone shared3
        inet 10.1.14.143 netmask ffffffc0 broadcast 10.1.14.191
For address information, here are the pre- and post-boot ipmpstat outputs.
global# ipmpstat -a
ADDRESS                   STATE  GROUP       INBOUND     OUTBOUND
0.0.0.0                   down   ipmp0       --          --
global# ipmpstat -a
ADDRESS                   STATE  GROUP       INBOUND     OUTBOUND
10.1.14.143               up     ipmp0       bge1        bge2 bge1
10.1.14.142               up     ipmp0       bge2        bge2 bge1
10.1.14.141               up     ipmp0       bge1        bge2 bge1
What's really neat is that it shows which interface(s) are used for outbound traffic. A different interface will be selected for each new remote IP address. That is the level of outbound load spreading at this time.
global# ipmpstat -g
GROUP       GROUPNAME   STATE     FDT       INTERFACES
ipmp0       shared      ok        --        bge2 bge1 (bge3)
There is no difference in the group output before or after booting the zones.
global# ipmpstat -g
GROUP       GROUPNAME   STATE     FDT       INTERFACES
ipmp0       shared      ok        --        bge2 bge1 (bge3)
The FDT column lists the probe-based failure detection time, and is empty since that is disabled in this setup. bge3 is listed third and in parentheses since that interface is not being used for data traffic at this time.
global# ipmpstat -i
INTERFACE   ACTIVE  GROUP       FLAGS     LINK      PROBE     STATE
bge3        no      ipmp0       is-----   up        disabled  ok
bge2        yes     ipmp0       -------   up        disabled  ok
bge1        yes     ipmp0       --mb---   up        disabled  ok
Also, there is no difference in interface status. In both cases bge1 is used for multicast and broadcast traffic, and bge3 is inactive and in standby mode.
global# ipmpstat -i
INTERFACE   ACTIVE  GROUP       FLAGS     LINK      PROBE     STATE
bge3        no      ipmp0       is-----   up        disabled  ok
bge2        yes     ipmp0       -------   up        disabled  ok
bge1        yes     ipmp0       --mb---   up        disabled  ok
The probe and target output is uninteresting in this setup since probe-based failure detection is not enabled. I am including it for completeness.
global# ipmpstat -p
ipmpstat: probe-based failure detection is disabled

global# ipmpstat -t
INTERFACE   MODE      TESTADDR            TARGETS
bge3        disabled  --                  --
bge2        disabled  --                  --
bge1        disabled  --                  --
So let's see what happens on a link 'failure' as I turn off the switch port going to bge1.

On the console, the indication is a link failure.

Jan 15 14:49:07 global in.mpathd[210]: The link has gone down on bge1
Jan 15 14:49:07 global in.mpathd[210]: IP interface failure detected on bge1 of group shared
The various ipmpstat outputs reflect the failure of bge1 and failover to bge3, which had been in standby mode, and to bge2. I had expected both IP addresses to end up on bge3. Instead, IPMP determines how best to spread the IPs across the available interfaces.

The address output shows that .141 and .143 are now on bge3.

global# ipmpstat -a
ADDRESS                   STATE  GROUP       INBOUND     OUTBOUND
10.1.14.143               up     ipmp0       bge3        bge3 bge2
10.1.14.142               up     ipmp0       bge2        bge3 bge2
10.1.14.141               up     ipmp0       bge2        bge3 bge2
The group status has changed, with bge1 now shown in brackets as it is in a failed state.
global# ipmpstat -g
GROUP       GROUPNAME   STATE     FDT       INTERFACES
ipmp0       shared      degraded  --        bge3 bge2 [bge1]
The interface status makes it clear that bge1 is down. Broadcast and multicast are now handled by bge2.
global# ipmpstat -i
INTERFACE   ACTIVE  GROUP       FLAGS     LINK      PROBE     STATE
bge3        yes     ipmp0       -s-----   up        disabled  ok
bge2        yes     ipmp0       --mb---   up        disabled  ok
bge1        no      ipmp0       -------   down      disabled  failed
As expected, the only difference in the ifconfig output is for bge1, which shows that it is in a failed state. The zones continue to be shown using the ipmp0 interface. This took me a little bit of getting used to. Before, ifconfig was sufficient to fully see what the state is; now, I must use ipmpstat as well.

global# ifconfig -a4
...
bge1: flags=211000803<UP,BROADCAST,MULTICAST,IPv4,FAILED,CoS> mtu 1500 index 3
        inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
        groupname shared
        ether 0:3:ba:e3:42:8c
bge2: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 4
        inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
        groupname shared
        ether 0:3:ba:e3:42:8d
bge3: flags=221000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,STANDBY,CoS> mtu 1500 index 5
        inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
        groupname shared
        ether 0:3:ba:e3:42:8e
ipmp0: flags=8201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS,IPMP> mtu 1500 index 6
        zone shared1
        inet 10.1.14.141 netmask ffffffc0 broadcast 10.1.14.191
        groupname shared
ipmp0:1: flags=8201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS,IPMP> mtu 1500 index 6
        zone shared2
        inet 10.1.14.142 netmask ffffffc0 broadcast 10.1.14.191
ipmp0:2: flags=8201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS,IPMP> mtu 1500 index 6
        zone shared3
        inet 10.1.14.143 netmask ffffffc0 broadcast 10.1.14.191
"Repairing" the interface, things return to normal.
Jan 15 15:13:03 global in.mpathd[210]: The link has come up on bge1
Jan 15 15:13:03 global in.mpathd[210]: IP interface repair detected on bge1 of group shared
Note that only one IP address ended up getting moved back to bge1.
global# ipmpstat -a
ADDRESS                   STATE  GROUP       INBOUND     OUTBOUND
10.1.14.143               up     ipmp0       bge1        bge2 bge1
10.1.14.142               up     ipmp0       bge2        bge2 bge1
10.1.14.141               up     ipmp0       bge2        bge2 bge1
Interface bge3 is back in standby mode.
global# ipmpstat -g
GROUP       GROUPNAME   STATE     FDT       INTERFACES
ipmp0       shared      ok        --        bge2 bge1 (bge3)
All three interfaces are up, only two are active, and broadcast and multicast stayed on bge2 (no need to change that now).
global# ipmpstat -i
INTERFACE   ACTIVE  GROUP       FLAGS     LINK      PROBE     STATE
bge3        no      ipmp0       is-----   up        disabled  ok
bge2        yes     ipmp0       --mb---   up        disabled  ok
bge1        yes     ipmp0       -------   up        disabled  ok
As a further example of the rebalancing of IP addresses, here is what happens with four IP addresses spread across two interfaces.
global# ipmpstat -a
ADDRESS                   STATE  GROUP       INBOUND     OUTBOUND
10.1.14.144               up     ipmp0       bge2        bge2 bge1
10.1.14.143               up     ipmp0       bge1        bge2 bge1
10.1.14.142               up     ipmp0       bge2        bge2 bge1
10.1.14.141               up     ipmp0       bge1        bge2 bge1

Jan 15 16:19:09 global in.mpathd[210]: The link has gone down on bge1
Jan 15 16:19:09 global in.mpathd[210]: IP interface failure detected on bge1 of group shared

global# ipmpstat -a
ADDRESS                   STATE  GROUP       INBOUND     OUTBOUND
10.1.14.144               up     ipmp0       bge2        bge3 bge2
10.1.14.143               up     ipmp0       bge3        bge3 bge2
10.1.14.142               up     ipmp0       bge2        bge3 bge2
10.1.14.141               up     ipmp0       bge3        bge3 bge2

Jan 15 18:11:35 global in.mpathd[210]: The link has come up on bge1
Jan 15 18:11:35 global in.mpathd[210]: IP interface repair detected on bge1 of group shared

global# ipmpstat -a
ADDRESS                   STATE  GROUP       INBOUND     OUTBOUND
10.1.14.144               up     ipmp0       bge2        bge2 bge1
10.1.14.143               up     ipmp0       bge1        bge2 bge1
10.1.14.142               up     ipmp0       bge2        bge2 bge1
10.1.14.141               up     ipmp0       bge1        bge2 bge1
There is even spreading of the IP addresses across any two active interfaces.

Using the New IPMP Configuration Style

In the previous examples, I used the old style of configuring IPMP with the /etc/hostname.xyzN files. Those files should work on all older versions of Solaris as well as with the re-architecture bits. This section briefly covers the new format.

A new file is introduced: the hostname.<ipmp-group> configuration file. The group name must follow the same format as any other data link name: ASCII characters followed by a number. I will use the same group name as above; however, I have to add a number to the end--thus the group name will be shared0. If the name does not have the trailing number, the old style of IPMP setup will be used.

I create a file to define the IPMP group. Note that it contains only the keyword ipmp.

global# cat /etc/hostname.shared0
ipmp
The other files for the NICs reference the IPMP group name.

global# cat /etc/hostname.bge1
group shared0 up

global# cat /etc/hostname.bge2
group shared0 up

global# cat /etc/hostname.bge3
group shared0 standby up
One note that may not be obvious: I am not using the keyword -failover because I am not using test addresses. Thus the interfaces are also not listed as deprecated in the ifconfig output.

global# ifconfig -a4
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
shared0: flags=8201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS,IPMP> mtu 1500 index 2
        inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
        groupname shared0
bge0: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 3
        inet 139.164.63.125 netmask ffffff00 broadcast 139.164.63.255
        ether 0:3:ba:e3:42:8b
bge1: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 4
        inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
        groupname shared0
        ether 0:3:ba:e3:42:8c
bge2: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 5
        inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
        groupname shared0
        ether 0:3:ba:e3:42:8d
bge3: flags=261000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,STANDBY,INACTIVE,CoS> mtu 1500 index 6
        inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
        groupname shared0
        ether 0:3:ba:e3:42:8e
After booting the zones, which are still configured to use bge1 or bge2, things look like this.
global# ifconfig -a4
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone shared1
        inet 127.0.0.1 netmask ff000000
lo0:2: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone shared2
        inet 127.0.0.1 netmask ff000000
lo0:3: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone shared3
        inet 127.0.0.1 netmask ff000000
shared0: flags=8201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS,IPMP> mtu 1500 index 2
        inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
        groupname shared0
shared0:1: flags=8201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS,IPMP> mtu 1500 index 2
        zone shared1
        inet 10.1.14.141 netmask ffffffc0 broadcast 10.1.14.191
shared0:2: flags=8201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS,IPMP> mtu 1500 index 2
        zone shared2
        inet 10.1.14.142 netmask ffffffc0 broadcast 10.1.14.191
shared0:3: flags=8201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS,IPMP> mtu 1500 index 2
        zone shared3
        inet 10.1.14.143 netmask ffffffc0 broadcast 10.1.14.191
bge0: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 3
        inet 139.164.63.125 netmask ffffff00 broadcast 139.164.63.255
        ether 0:3:ba:e3:42:8b
bge1: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 4
        inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
        groupname shared0
        ether 0:3:ba:e3:42:8c
bge2: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 5
        inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
        groupname shared0
        ether 0:3:ba:e3:42:8d
bge3: flags=261000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,STANDBY,INACTIVE,CoS> mtu 1500 index 6
        inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
        groupname shared0
        ether 0:3:ba:e3:42:8e
global# ipmpstat -a
ADDRESS                   STATE  GROUP       INBOUND     OUTBOUND
10.1.14.143               up     shared0     bge1        bge2 bge1
10.1.14.142               up     shared0     bge2        bge2 bge1
10.1.14.141               up     shared0     bge1        bge2 bge1
0.0.0.0                   up     shared0     --          --

global# ipmpstat -g
GROUP       GROUPNAME   STATE     FDT       INTERFACES
shared0     shared0     ok        --        bge2 bge1 (bge3)

global# ipmpstat -i
INTERFACE   ACTIVE  GROUP       FLAGS     LINK      PROBE     STATE
bge3        no      shared0     is-----   up        disabled  ok
bge2        yes     shared0     -------   up        disabled  ok
bge1        yes     shared0     --mb---   up        disabled  ok
Things are the same as before, except that I have now specified the IPMP group name (shared0 instead of the previous ipmp0). I find this very useful, since the name can help identify the group's purpose, and when debugging, IPMP group names with context-appropriate text should be very helpful.

I find the integration, or rather the backward compatibility, great. Not only does the old or existing IPMP setup work, the existing zonecfg network setup works as well. This means the same configuration files will work pre- and post-re-architecture!

Let's take a look at how things look within a zone.

shared1# ifconfig -a4
lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
shared0:1: flags=8201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS,IPMP> mtu 1500 index 2
        inet 10.1.14.141 netmask ffffffc0 broadcast 10.1.14.191
shared1# netstat -rnf inet

Routing Table: IPv4
  Destination           Gateway           Flags  Ref     Use     Interface
-------------------- -------------------- ----- ----- ---------- ---------
default              10.1.14.129          UG        1          2 shared0
10.1.14.128          10.1.14.141          U         1          0 shared0:1
127.0.0.1            127.0.0.1            UH        1         33 lo0:1
The zone's network is on the link shared0 using a logical IP, and everything else looks as it has always looked. This output is actually while bge1 is down. IPMP hides all the details in the non-global zone.

Using Probe-based Failover

The configurations so far have used link-based failure detection. IPMP can also do probe-based failure detection, where ICMP packets are sent to other nodes on the network. This allows failure detection well beyond what link-based detection can do, covering the whole switch and devices beyond it, up to and including routers. In order to use probe-based failure detection, test addresses are required on the physical NICs. For my configuration, I use test addresses on a completely different subnet, and my router is another system running Solaris 10. The router happens to be a zone with two NICs, configured as an exclusive IP Instance.

I am using a completely different subnet because I want to isolate the global zone from the non-global zones; the setup also uses the defrouter zonecfg option, and I don't want to interfere with that.

The IPMP setup is as follows. I have added test addresses on the 172.16.10.0/24 subnet, and the interfaces are set to not fail over.

global# cat /etc/hostname.shared0
ipmp

global# cat /etc/hostname.bge1
172.16.10.141/24 group shared0 -failover up

global# cat /etc/hostname.bge2
172.16.10.142/24 group shared0 -failover up

global# cat /etc/hostname.bge3
172.16.10.143/24 group shared0 -failover standby up
This is the state of the system before bringing up any zones.
global# ifconfig -a4
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
shared0: flags=8201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS,IPMP> mtu 1500 index 2
        inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
        groupname shared0
bge0: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 3
        inet 139.164.63.125 netmask ffffff00 broadcast 139.164.63.255
        ether 0:3:ba:e3:42:8b
bge1: flags=209040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,CoS> mtu 1500 index 4
        inet 172.16.10.141 netmask ffffff00 broadcast 172.16.10.255
        groupname shared0
        ether 0:3:ba:e3:42:8c
bge2: flags=209040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,CoS> mtu 1500 index 5
        inet 172.16.10.142 netmask ffffff00 broadcast 172.16.10.255
        groupname shared0
        ether 0:3:ba:e3:42:8d
bge3: flags=269040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,STANDBY,INACTIVE,CoS> mtu 1500 index 6
        inet 172.16.10.143 netmask ffffff00 broadcast 172.16.10.255
        groupname shared0
        ether 0:3:ba:e3:42:8e
The ipmpstat output is different now.
global# ipmpstat -a
ADDRESS                   STATE  GROUP       INBOUND     OUTBOUND
0.0.0.0                   up     shared0     --          --

global# ipmpstat -g
GROUP       GROUPNAME   STATE     FDT       INTERFACES
shared0     shared0     ok        10.00s    bge2 bge1 (bge3)

global# ipmpstat -i
INTERFACE   ACTIVE  GROUP       FLAGS     LINK      PROBE     STATE
bge3        no      shared0     is-----   up        ok        ok
bge2        yes     shared0     -------   up        ok        ok
bge1        yes     shared0     --mb---   up        ok        ok
The Failure Detection Time is now set, and the probe information option lists ongoing updates of the probe results.
global# ipmpstat -p
TIME      INTERFACE   PROBE  NETRTT    RTT       RTTAVG    TARGET
0.14s     bge3        426    0.48ms    0.56ms    0.68ms    172.16.10.16
0.24s     bge2        426    0.50ms    0.98ms    0.74ms    172.16.10.16
0.26s     bge1        424    0.42ms    0.71ms    1.72ms    172.16.10.16
1.38s     bge1        425    0.42ms    0.50ms    1.57ms    172.16.10.16
1.79s     bge2        427    0.54ms    0.86ms    0.76ms    172.16.10.16
1.93s     bge3        427    0.45ms    0.53ms    0.66ms    172.16.10.16
2.79s     bge1        426    0.38ms    0.56ms    1.44ms    172.16.10.16
2.85s     bge2        428    0.34ms    0.41ms    0.71ms    172.16.10.16
3.15s     bge3        428    0.44ms    4.55ms    1.14ms    172.16.10.16
^C
The target information option shows the current probe targets.
global# ipmpstat -t
INTERFACE   MODE      TESTADDR            TARGETS
bge3        multicast 172.16.10.143       172.16.10.16
bge2        multicast 172.16.10.142       172.16.10.16
bge1        multicast 172.16.10.141       172.16.10.16
Once the zones are up and running and bge1 is down, the status output changes accordingly.
global# ipmpstat -a
ADDRESS                   STATE  GROUP       INBOUND     OUTBOUND
10.1.14.143               up     shared0     bge2        bge3 bge2
10.1.14.142               up     shared0     bge3        bge3 bge2
10.1.14.141               up     shared0     bge2        bge3 bge2
0.0.0.0                   up     shared0     --          --

global# ipmpstat -g
GROUP       GROUPNAME   STATE     FDT       INTERFACES
shared0     shared0     degraded  10.00s    bge3 bge2 [bge1]

global# ipmpstat -i
INTERFACE   ACTIVE  GROUP       FLAGS     LINK      PROBE     STATE
bge3        yes     shared0     -s-----   up        ok        ok
bge2        yes     shared0     --mb---   up        ok        ok
bge1        no      shared0     -------   down      failed    failed

global# ipmpstat -p
TIME      INTERFACE   PROBE  NETRTT    RTT       RTTAVG    TARGET
0.46s     bge2        839    0.43ms    0.98ms    1.17ms    172.16.10.16
1.15s     bge3        840    0.32ms    0.37ms    0.65ms    172.16.10.16
1.48s     bge2        840    0.37ms    0.45ms    1.08ms    172.16.10.16
2.56s     bge3        841    0.45ms    0.54ms    0.63ms    172.16.10.16
3.17s     bge2        841    0.40ms    0.51ms    1.01ms    172.16.10.16
3.93s     bge3        842    0.40ms    0.47ms    0.61ms    172.16.10.16
4.61s     bge2        842    0.63ms    0.75ms    0.98ms    172.16.10.16
5.17s     bge3        843    0.38ms    0.46ms    0.59ms    172.16.10.16
5.72s     bge2        843    0.36ms    0.44ms    0.91ms    172.16.10.16
^C

global# ipmpstat -t
INTERFACE   MODE      TESTADDR            TARGETS
bge3        multicast 172.16.10.143       172.16.10.16
bge2        multicast 172.16.10.142       172.16.10.16
bge1        multicast 172.16.10.141       172.16.10.16
Without showing the details here, the non-global zones continue to function.

Bringing all three interfaces down, things look like this.

Jan 19 13:51:22 global in.mpathd[61]: The link has gone down on bge2
Jan 19 13:51:22 global in.mpathd[61]: IP interface failure detected on bge2 of group shared0
Jan 19 13:52:04 global in.mpathd[61]: The link has gone down on bge3
Jan 19 13:52:04 global in.mpathd[61]: All IP interfaces in group shared0 are now unusable
global# ipmpstat -a
ADDRESS                   STATE  GROUP       INBOUND     OUTBOUND
10.1.14.143               up     shared0     --          --
10.1.14.142               up     shared0     --          --
10.1.14.141               up     shared0     --          --
0.0.0.0                   up     shared0     --          --

global# ipmpstat -g
GROUP       GROUPNAME   STATE     FDT       INTERFACES
shared0     shared0     failed    10.00s    [bge3 bge2 bge1]

global# ipmpstat -i
INTERFACE   ACTIVE  GROUP       FLAGS     LINK      PROBE     STATE
bge3        no      shared0     -s-----   down      failed    failed
bge2        no      shared0     -------   down      failed    failed
bge1        no      shared0     -------   down      failed    failed

global# ipmpstat -p
^C

global# ipmpstat -t
INTERFACE   MODE      TESTADDR            TARGETS
bge3        multicast 172.16.10.143       --
bge2        multicast 172.16.10.142       --
bge1        multicast 172.16.10.141       --
The whole IPMP group shared0 is down, all the ipmpstat output reflects that, and no probes are listed nor probe RTT reports updated.

An additional scenario might be to have two separate paths, and have something other than a link failure force the failover.
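
In addition to pulling cables or disabling switch ports, an interface can also be taken offline and back online administratively, which exercises the same failover and failback paths. A sketch using if_mpadm(1M):

global# if_mpadm -d bge1    # offline bge1; its data addresses move to other group members
global# ipmpstat -i         # bge1 should now show as not active
global# if_mpadm -r bge1    # reattach bge1; addresses may be spread back across the group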

Monday Nov 05, 2007

Using IP Instances with VLANs or How to Make a Few NICs Look Like Many

[Minor editorial and clarification updates 2009.09.28]

Solaris 10 8/07 includes a new feature for zone networking. IP Instances is the facility to give a non-global zone its own complete control over the IP stack, which previously was shared with and controlled by the global zone.

A zone that has an exclusive IP Instance can set interface parameters using ifconfig(1M), put an interface into promiscuous mode to run snoop(1M), be a DHCP client or server, set ndd(1M) variables, have its own IPsec policies, etc.

One requirement for an exclusive IP Instance is that it must have exclusive access to a link name. This is any NIC, VLAN-tagged NIC component, or aggregation at this time. When they become available, virtual NICs will make this much simpler, as a single NIC can be presented to the zones using a number of VNICs, effectively multiplexing access to that NIC. A link name is an entry that can be found in /dev, such as /dev/bge0, /dev/bge321001 (VLAN tag 321 on bge1), aggr2, and so on.

To see what link names are available on a system, use dladm(1M) with the show-link option. For example:

global# dladm show-link
bge0            type: non-vlan  mtu: 1500       device: bge0
bge1            type: non-vlan  mtu: 1500       device: bge1
bge2            type: non-vlan  mtu: 1500       device: bge2
bge3            type: non-vlan  mtu: 1500       device: bge3

As folks have started to use IP Instances to isolate their zones, they have noticed that they don't have enough link names (I'll use just 'link' in the rest of this) to assign to the zones they have configured or wish to configure as exclusive. So, how does a global zone administrator configure a large number of zones as exclusive?

Let's consider the following situation: three tiers of a web service, with each tier on a different network.

If each server has only one NIC, the total number of switch ports required is at least eight (8). If each server has a management port, that is another eight ports, even if they are on a different, management network. Add to that at least three switch ports going to the router.

Consolidating the servers onto a single Solaris 10 instance using exclusive IP Instances requires at least eight NICs for the services (one per service), and at least one for the global zone and management. (We'll ignore any service processor requirements, since those are separate anyway, and access could be either via a serial interface or a network.)

One option to consider is using VLANs and VLAN tagging. With VLAN tagging, additional information is put onto the Ethernet frame by the sender, which allows the receiver to associate that frame with a specific VLAN. The specification allows up to 4094 VLAN tags, from 1 to 4094. For more information on administering VLANs in Solaris 10, see Administering Virtual Local Area Networks in the Solaris 10 System Administrator Collection.

VLANs are a method to collapse multiple Ethernet broadcast domains (whether hubs or switches) into a single network unit (usually a switch). Typically, a single IP subnet, such as 192.168.54.0/24, is on one broadcast domain. Within such a switch you can have a large number of virtual switches, consolidating network infrastructure while still isolating broadcast domains. Often, the use of VLANs is completely hidden from the systems tied to the switch, as a port on the switch is configured for only one VLAN. With VLAN tagging, a single port can allow a system to connect to multiple VLANs, and therefore multiple networks. Both the switch and the system must be configured for VLAN tagging for this to work properly. VLAN tagging has been used for years, and is robust and reliable.

Any one network interface can have multiple VLANs configured for it, but a single VLAN ID can only exist once on each interface. Thus it is possible to put multiple networks or broadcast domains on a single interface. It is not possible to put more than one VLAN of any broadcast domain on a single interface. For example, you can put VLANs 111, 112, and 113 on interface bge1, but you can not put VLAN 111 on bge1 more than once. You can, however, put VLAN 111 on interfaces bge1 and bge2.

Using the case shown above, if the three web servers are on the same network, say 10.1.111.0/24, you would want three interfaces all connected to a VLAN-capable switch, and configure each interface with a VLAN tag that matches the VLAN ID on the switch.

For example, if the VLAN tag is 111 and the interfaces are bge1 through bge3, the link names you would assign to the three web servers would be bge111001, bge111002, and bge111003.
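
The link name encodes both the VLAN ID and the device instance: the numeric part is the VLAN ID times 1000 plus the instance number. For example:

  111 * 1000 + 1 = 111001  ->  bge111001  (VLAN 111 on bge1)
  111 * 1000 + 2 = 111002  ->  bge111002  (VLAN 111 on bge2)
  112 * 1000 + 3 = 112003  ->  bge112003  (VLAN 112 on bge3)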

Introducing zones into the setup, the web servers can be run in three separate zones, and with exclusive IP Instances, they can be totally separate and each assigned a VLAN-tagged interface. Web Server 1 could have bge111001, Web Server 2 could have bge111002, and Web Server 3 could have bge111003.

global# zonecfg -z web1 info net
net:
        address not specified
        physical: bge111001

global# zonecfg -z web2 info net
net:
        address not specified
        physical: bge111002

global# zonecfg -z web3 info net
net:
        address not specified
        physical: bge111003

Within the zones, you could configure IP addresses 10.1.111.1/24 through 10.1.111.3/24.
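
Inside a zone, the VLAN-tagged link is plumbed and configured just like a physical interface. A sketch for web1 (the address is an example):

web1# ifconfig bge111001 plumb
web1# ifconfig bge111001 10.1.111.1 netmask 255.255.255.0 up

or persistently, by placing the address in /etc/hostname.bge111001 inside the zone.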

Similarly, for the authentication tier, using VLAN ID 112, you could assign the zones auth1 through auth3 to bge112001, bge112002, and bge112003, respectively. And for application servers app1 and app2 on VLAN ID 113, bge113001 and bge113002. This can be repeated until some limit is reached, whether it is network bandwidth, system resource limits, or the maximum number of concurrent VLANs on either the switch or Solaris.

This configuration could look like the following diagram.

Web Server 1, Auth Server 1, and Application Server 1 share the use of NIC1, yet are all on different VLANs (111, 112, and 113, respectively). The same holds for instances 2 and 3, except that there is no third application server. All traffic between the three web servers will stay within the switch, as will traffic between the authentication servers. Traffic between the tiers is passed between the IP networks by the router. NICg indicates that the global zone also has a network interface.

Using this technique, the maximum number of zones with exclusive IP Instances that you could deploy on a single system on the same subnet is limited to the number of interfaces capable of VLAN tagging. In the above example, with three bge interfaces available for zones, the maximum number of exclusive zones on a single subnet would be three. (I have intentionally reserved bge0 for the global zone, but it would be possible to use it as well, making sure the global zone uses a different VLAN ID altogether, such as 1 or 2.)

Wednesday May 30, 2007

In two places at once?

Some background. Like any other mobile workforce, Sun employees have a need to access internal network services while not in the office. While we use commercial products, Sun engineers have also been working on a *product* called punchin. Punchin is a Sun-created VPN technology that uses native IPsec/IKE from the operating system in which it runs. It is the primary VPN solution for Solaris servers and clients, and will be expanding to other operating systems such as Mac OS X in the near future.

Security policy states that if a system is 'punched in', it must not be on the public network at the same time. In other words, while the VPN tunnel is up, direct access to the Internet is restricted, especially access from the Internet to the system. While a system is on the VPN, it cannot, for example, also be your Internet-facing personal web server.

Bringing up the VPN is an interactive process, requiring a challenge/response sequence. If you are like me, you may have a system at home and while at work need to access from that system some data on the corporate network. This is a catch-22, since the connection you use remotely to activate the VPN breaks as you start the VPN establishment process (enforcing the policy of being on only one network at a time).

Introduce Solaris Containers, or zones. Each zone looks like its own system. However, they share a single kernel and a single IP stack. But wait, there is this new thing called IP Instances that allows zones configured with an exclusive IP Instance to have their own IP stack (they already have their own TCP and UDP for all practical purposes). And wouldn't it be great if I could do this with just one NIC? Hey, Project Crossbow has IP Instances and VNICs. Great!

Now for the reality check. As I was told not so long ago, Rome was not built in one day. IP Instances are in Solaris Nevada and targeted for Solaris 10 7/07. VNICs are only available in a snapshot applied via BFU to Nevada build 61. [See also Note 1 below.]

So, let's see how to do this with just IP Instances.

First, since each IP Instance (at a minimum, the global zone and one non-global zone) needs its own NIC, I need at least two NICs. Not all NICs support IP Instances; the one(s) for the non-global zone(s) must use GLDv3 drivers (dladm show-link reports non-GLDv3 drivers as type: legacy, as seen below).

In my case, I am using a Sun Blade 100 with an on-board eri 100Mbps Ethernet interface. I purchased an Intel PRO/1000 MT Server NIC, which uses the e1000g driver. Here is a list of NICs that are known to work with IP Instances and VNICs.

After installing Solaris Nevada, I created my non-global zone with the following configuration:

global# zonecfg -z vpnzone info
zonename: vpnzone
zonepath: /zones/vpnzone
brand: native
autoboot: true
bootargs: 
pool: 
limitpriv: 
scheduling-class:
ip-type: exclusive
inherit-pkg-dir:
        dir: /lib
inherit-pkg-dir:
        dir: /platform
inherit-pkg-dir:
        dir: /sbin
inherit-pkg-dir:
        dir: /usr
inherit-pkg-dir:
        dir: /etc/crypto/certs
fs:
        dir: /usr/local
        special: /zones/vpnzone/usr-local
        raw not specified
        type: lofs
        options: []
net:
        address not specified
        physical: e1000g0
global#
I had to include an additional inherit-pkg-dir directive for this sparse zone, because currently some of the crypto components are not duplicated into a non-global zone. Without this, even the digest command would fail, for example. I needed to provide a private directory for /usr/local since that is where the Punchin packages get installed by default.
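
For reference, the extra resources could be added with a zonecfg session along these lines (a sketch; the paths match the configuration shown above):

global# zonecfg -z vpnzone
zonecfg:vpnzone> add inherit-pkg-dir
zonecfg:vpnzone:inherit-pkg-dir> set dir=/etc/crypto/certs
zonecfg:vpnzone:inherit-pkg-dir> end
zonecfg:vpnzone> add fs
zonecfg:vpnzone:fs> set dir=/usr/local
zonecfg:vpnzone:fs> set special=/zones/vpnzone/usr-local
zonecfg:vpnzone:fs> set type=lofs
zonecfg:vpnzone:fs> end
zonecfg:vpnzone> commit
zonecfg:vpnzone> exit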

Once I installed and configured vpnzone, I was able to install and configure the Punchin client.

However, this required two NICs. So to use just one, I created a VNIC for my VPN zone.

global# dladm show-dev
eri0            link: unknown   speed: 0Mb      duplex: unknown
e1000g0         link: up        speed: 100Mb    duplex: full
global# dladm show-link
eri0            type: legacy    mtu: 1500       device: eri0
e1000g0         type: non-vlan  mtu: 1500       device: e1000g0
global# dladm create-vnic -d e1000g0 -m 0:4:23:e0:5f:1 1
global# dladm show-link
eri0            type: legacy    mtu: 1500       device: eri0
e1000g0         type: non-vlan  mtu: 1500       device: e1000g0
vnic1           type: non-vlan  mtu: 1500       device: vnic1
global# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
e1000g0: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 2
        inet 192.168.1.58 netmask ffffff00 broadcast 192.168.1.255
        ether 0:4:23:e0:5f:6b
global# 
I chose to provide my own MAC address, based on the address of the underlying NIC.

I modified the non-global zone configuration:

global# zonecfg -z vpnzone info
zonename: vpnzone
zonepath: /zones/vpnzone
brand: native
autoboot: true
bootargs: 
pool: 
limitpriv: 
scheduling-class:
ip-type: exclusive
inherit-pkg-dir:
        dir: /lib
inherit-pkg-dir:
        dir: /platform
inherit-pkg-dir:
        dir: /sbin
inherit-pkg-dir:
        dir: /usr
inherit-pkg-dir:
        dir: /etc/crypto/certs
fs:
        dir: /usr/local
        special: /zones/vpnzone/usr-local
        raw not specified
        type: lofs
        options: []
net:
        address not specified
        physical: vnic1
global#
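The switch from e1000g0 to vnic1 is a small zonecfg change (a sketch; the zone needs to be rebooted for it to take effect):

global# zonecfg -z vpnzone
zonecfg:vpnzone> select net physical=e1000g0
zonecfg:vpnzone:net> set physical=vnic1
zonecfg:vpnzone:net> end
zonecfg:vpnzone> commit
zonecfg:vpnzone> exit
global# zoneadm -z vpnzone reboot
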
Now I can access the home system while I am not there, zlogin into vpnzone, punch in, and be connected to our internal network. This is really significant for me, since at home I have 6Mbps download compared to only 600Kbps in the office. So downloading the DVD ISO that I used to create this setup took one tenth the time it would have taken at work.

[1] I also used the SUNWonbld package. This package is specific to build 61!

Because I install BFUs a lot, I have added the following to my .profile

if [ -d /opt/onbld ]
then
   FASTFS=/opt/onbld/bin/`uname -p`/fastfs ; export FASTFS
   BFULD=/opt/onbld/bin/`uname -p`/bfuld ; export BFULD
   GZIPBIN=/usr/bin/gzip ; export GZIPBIN
   PATH=$PATH:/opt/onbld/bin
fi

Saturday May 12, 2007

Network performance differences within an IP Instance vs. across IP Instances

When consolidating or co-locating multiple applications on the same system, inter-application network traffic typically stays within the system, since the shared IP stack in the kernel recognizes that the destination address is on the same system and loops the traffic back up the stack without ever putting the data on a physical network. This has introduced some challenges for customers deploying Solaris Containers (specifically zones) where different Containers are on different subnets and it is expected that traffic between them leaves the system (maybe through a router or firewall to restrict or monitor inter-tier traffic).

With IP Instances, in Solaris Nevada build 57 and targeted for Solaris 10 7/07, there is the ability to configure zones with exclusive IP Instances, thus forcing all traffic leaving a zone out onto the network. This introduces additional network stack processing on both transmit and receive. Prompted by some customer questions regarding this, I performed a simple test to measure the difference.

On two systems, a V210 with two 1.336GHz CPUs and 8GB memory, and an x4200 with two dual-core Opteron XXXX and 8GB memory, I ran FTP transfers between zones. My switch is a Netgear GS716T Smart Switch with 1Gbps ports. The V210 has four bge interfaces and the x4200 has four e1000g interfaces.

I created four zones. Zones x1 and x2 have eXclusive IP Instances, while zones s1 and s2 have Shared IP Instances (IP is shared with the global zone). Both systems are running Solaris 10 7/07 build 06.

Relevant zonecfg info is as follows (all zones are sparse):


v210# zonecfg -z x1 info
zonename: x1
zonepath: /localzones/x1
...
ip-type: exclusive
net:
        address not specified
        physical: bge1

v210# zonecfg -z s1 info
zonename: s1
zonepath: /localzones/s1
...
ip-type: shared
net:
        address: 10.10.10.11/24
        physical: bge3
 
As a test user in each zone, I created a file using 'mkfile 1000m /tmp/file1000m'. Then I used ftp to transfer it between zones. No tuning was done whatsoever.

The results are as follows.

V210: (bge)

Exclusive to Exclusive
x1# /usr/bin/time ftp x2 << EOF
cd /tmp
bin
put file1000m
EOF

real       17.0
user        0.2
sys        11.2

Exclusive to Shared
x1# /usr/bin/time ftp s2 << EOF
cd /tmp
bin
put file1000m
EOF

real       17.3
user        0.2
sys        11.6

Shared to Shared
s2# /usr/bin/time ftp s1 << EOF
cd /tmp
bin
put file1000m
EOF

real        6.6
user        0.1
sys         5.3


X4200: (e1000g)

Exclusive to Exclusive
x1# /usr/bin/time ftp x2 << EOF
cd /tmp
bin
put file1000m
EOF

real        9.1
user        0.0
sys         4.0

Exclusive to Shared
x1# /usr/bin/time ftp s2 << EOF
cd /tmp
bin
put file1000m
EOF

real        9.1
user        0.0
sys         4.1

Shared to Shared
s2# /usr/bin/time ftp s1 << EOF
cd /tmp
bin
put file1000m
EOF

real        4.0
user        0.0
sys         3.5
I ran each test several times and picked a result that seemed average across the runs. Not very scientific, and a table might be nicer.

Something I noticed that surprised me was that time spent in IP and the driver is measurable on the V210 with bge, and much less so on the x4200 with e1000g.
