Thursday Apr 16, 2009

What happened to my packets? -- or -- Dual default routes and shared IP zones

I recently received a call from someone who has helped me out a lot on some performance issues (thanks, Jim Fiori), and I was glad to be able to return even a small part of those favors!

He had been contacted to help a customer who was ready to deploy a web application, and they were experiencing intermittent lack of connection to the web site. Interestingly, they were also using zones, a bunch of them (OK, a handful)--and so right up my alley.

The customer was running a multi-tiered web application on an x4600 (so Solaris on x86 as well!), with the web server, web router, and application tiers in different zones. They were using shared IP Instances, so all the network configuration was being done in the global zone.

Initially, we had to modify some configuration parameters, especially regarding default routes. Since the system was installed with Solaris 10 5/08 and had more recent patches, we could use the defrouter feature introduced in 10/08 to make setting up routes for the non-global zones a little easier. This was needed because the global zone was using only one NIC, and it was not going to be on the networks that the non-global zones were on.

What made the configuration a little unique was that the web server needs a default router to the Internet, while the application server needs a route to other systems behind a different router. Individually, everything is fine. However, the web1 zone also needs to be on the network that the application and web router are on, so it ends up having two interfaces.

Let's look at web1 when it is the only zone running.

web1# ifconfig -a4
lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
bge1:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        inet 172.16.1.41 netmask ffffff00 broadcast 172.16.1.255
bge2:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
        inet 192.168.51.41 netmask ffffff00 broadcast 192.168.51.255
web1# netstat -rn
Routing Table: IPv4
  Destination           Gateway           Flags  Ref     Use     Interface
-------------------- -------------------- ----- ----- ---------- ---------
default              172.16.1.1           UG        1          0 bge1
172.16.1.0           172.16.1.41          U         1          0 bge1:1
192.168.51.0         192.168.51.41        U         1          0 bge2:1
224.0.0.0            172.16.1.41          U         1          0 bge1:1
127.0.0.1            127.0.0.1            UH        5         34 lo0:1

The zone is on two interfaces, bge1 and bge2, and has a default route that uses bge1. However, when zone app1 is running, there is a second default route, on bge2. The same is true if app2 or odr is running. Note that these three zones are only on bge2.

app1# ifconfig -a4
lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
bge2:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
        inet 192.168.51.43 netmask ffffff00 broadcast 192.168.51.255
app1# netstat -rn
Routing Table: IPv4
  Destination           Gateway           Flags  Ref     Use     Interface
-------------------- -------------------- ----- ----- ---------- ---------
default              192.168.51.1         UG        1          0 bge2
192.168.51.0         192.168.51.43        U         1          0 bge2:1
224.0.0.0            192.168.51.43        U         1          0 bge2:1
127.0.0.1            127.0.0.1            UH        3         51 lo0:1

In the meantime, this is what happens in web1.

web1# netstat -rn

Routing Table: IPv4
  Destination           Gateway           Flags  Ref     Use     Interface
-------------------- -------------------- ----- ----- ---------- --------- 
default              192.168.51.1         UG        1          0 bge2
default              172.16.1.1           UG        1          0 bge1 
172.16.1.0           172.16.1.41          U         1          0 bge1:1
192.168.51.0         192.168.51.41        U         1          0 bge2:4
224.0.0.0            172.16.1.41          U         1          0 bge1:1
127.0.0.1            127.0.0.1            UH        6        132 lo0:4

With any of the other zones running, web1 now has two default routes. This happens only in web1, since it is the only zone that is on both its public-facing data link (bge1) and the shared data link (bge2).

Traffic to any system on either the 192.168.51.0 or 172.16.1.0 network has no issues. But every time IP needs to determine a new path to a system that is not on either of those two networks, it picks a default route, round-robining between the two. Thus approximately half the time new connections will fail to establish, and existing connections that have been idle for a while may stop working.
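The effect of that round-robin selection can be sketched in a few lines of Python (this is purely illustrative pseudologic, not Solaris code; the gateway addresses are the two from the routing table above, and only 172.16.1.1 actually leads off-site from web1):

```python
from itertools import cycle

# The two default routes web1 sees once an app-tier zone is booted.
default_routes = ["172.16.1.1", "192.168.51.1"]

# IP alternates between equally specific default routes for each
# new off-link destination -- modeled here as a simple cycle.
picker = cycle(default_routes)

# Only 172.16.1.1 reaches the Internet from web1, so roughly every
# other new path is handed to a router that cannot deliver the traffic.
choices = [next(picker) for _ in range(10)]
reachable = choices.count("172.16.1.1")
print(f"{reachable} of {len(choices)} new paths picked the working router")
# -> 5 of 10 new paths picked the working router
```

That 50/50 split is exactly the intermittent connection failure the customer reported.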

This is how IP is supposed to work, so there is technically nothing wrong. It is a feature of zones and a shared IP Instance. [2009.06.23: For background on why IP works this way, see James' blog].

The only problem is that this is not what the customer wants!

One option would be to force all traffic between the web and application tiers out the bge1 interface, putting it on the wire. That may not be desirable for security reasons, and it introduces latency since the traffic now crosses the wire. Another option would be to use exclusive IP Instances for the web servers. For each web zone, and this example has only one, that would require two additional data links (NICs). That would add up. Also, this configuration is targeted for use with Solaris Cluster's scalable services, and those must be in shared IP Instance zones. Hummm....as I like to say.

We didn't know about the shared IP Instance restriction of Solaris Cluster at the time, and as the customer was considering how they were going to add additional NICs to all the systems, something slowly developed in my mind: how about creating a shared, dummy network between the web and application tiers? They had one spare NIC, and with shared IP it does not even need to be connected to a switch port, since IP will loop all the traffic back internally anyway!

The more I thought about it, the more I liked it, and I could not see anything wrong with it. At least not technically as I understood Solaris. Operationally, for the customer, it might be a little awkward.

Here is what I was thinking of...
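In zonecfg terms, the idea amounts to giving web1 (and similarly odr, app1, and app2) a net resource on a new private subnet over the spare NIC. A hedged sketch of what that change might look like, with the 192.168.52.0/24 addressing taken from the outputs that follow:

```
global# zonecfg -z web1
zonecfg:web1> add net
zonecfg:web1:net> set physical=bge3
zonecfg:web1:net> set address=192.168.52.41/24
zonecfg:web1:net> end
zonecfg:web1> commit
zonecfg:web1> exit
```

At the same time, web1's net resource on the shared bge2 link would be removed (remove net address=192.168.51.41), so web1 no longer shares a data link, and hence a default route, with the app tier.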

With this configuration, the web1 zone has a default route only to the Internet, and it can reach odr, and if necessary app1 and app2, directly via the new network. Meanwhile, app1 and app2 have only a single default route, to the Intranet. The nice thing is that bge3 does not even need to be up. That is visible in the ifconfig output, where bge3 does not show the RUNNING flag, indicating the port is not connected (or, in my case, has been disabled on the switch).

global# ifconfig -a4
...
bge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 129.154.53.125 netmask ffffff00 broadcast 129.154.53.255
        ether 0:3:ba:e3:42:8b
bge1: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        inet 0.0.0.0 netmask 0
        ether 0:3:ba:e3:42:8c
bge2: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
        inet 0.0.0.0 netmask 0
        ether 0:3:ba:e3:42:8d 
bge3: flags=1000802<BROADCAST,MULTICAST,IPv4> mtu 1500 index 5 
        inet 0.0.0.0 netmask 0
        ether 0:3:ba:e3:42:8e
...
And within web1 there is now only one default route.
web1# netstat -rn

Routing Table: IPv4
  Destination           Gateway           Flags  Ref     Use     Interface
-------------------- -------------------- ----- ----- ---------- --------- 
default              172.16.1.1           UG        1         17 bge1 
172.16.1.0           172.16.1.41          U         1          2 bge1:1
192.168.52.0         192.168.52.41        U         1          2 bge3:1
224.0.0.0            172.16.1.41          U         1          0 bge1:1
127.0.0.1            127.0.0.1            UH        4        120 lo0:1
In the customer's case, multiple systems were being used, so the private networks were connected together so that a web zone on one system could access an odr zone on another. I am showing the simple, single system case since it is so convenient.

If I were using Solaris Express Community Edition (SX-CE) or the OpenSolaris 2009.06 developer builds, which have the Crossbow bits and virtual NICs (VNICs) available, I would not even have needed to use that physical interface. Both are available from opensolaris.org.
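With the Crossbow bits, the spare NIC would not be needed at all: an etherstub can serve as the dummy network. A hedged sketch (the etherstub and VNIC names are illustrative, and this uses the dladm syntax from the integrated Crossbow builds):

```
global# dladm create-etherstub stub0
global# dladm create-vnic -l stub0 vnic1
```

The zones' net resources would then name vnic1 instead of bge3, and the "network" exists entirely in software.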

I hope this trick might help others out in the future.

Steffen

Tuesday Jan 13, 2009

Using zonecfg defrouter with shared-IP zones

[Update to IPMP testing 2009.01.20]

[Minor update 2009.01.14]

When running Solaris Zones in a shared-IP configuration, all network settings are determined either by how the zone is configured using zonecfg(1M) or by what the global zone's IP layer decides (such as routes). This has caused some trouble where zones are on different subnets, and especially where the global zone is not on the subnet(s) the non-global zones are on. While exclusive IP Instances were delivered to help address these cases, they require a data link per zone, and when running a large number of zones there may not be enough data links available.

With Solaris 10 10/08 (Update 6), an additional network configuration parameter is available for shared-IP zones. This is the default router (defrouter) optional parameter.

Using the defrouter parameter, it is possible to set which router to use for traffic leaving the zone. In the global zone, default router entries are created the first time the zone is booted. Note that the entries are not deleted when the zone is halted.
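For a zone that is already defined, the property can be set on the existing net resource. A sketch of one way to do it (the address matches the example output shown below):

```
global# zonecfg -z shared1
zonecfg:shared1> select net address=10.1.14.141/26
zonecfg:shared1:net> set defrouter=10.1.14.129
zonecfg:shared1:net> end
zonecfg:shared1> commit
zonecfg:shared1> exit
```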

The defrouter property looks like this for a zone with it configured.

global# zonecfg -z shared1 info net
net:
        address: 10.1.14.141/26
        physical: bge1
        defrouter: 10.1.14.129
And it looks like this if it is not set.
global# zonecfg -z shared1 info net
net:
        address: 10.1.14.141/26
        physical: bge1
        defrouter not specified
I have run a variety of configurations, and some things I observed are as follows. (Most of the configurations use a separate interface for the global zone (bge0) than for the non-global zones (bge1 and bge2). IPMP is not used in these configurations; a comment on that is at the end.) The [#] markers indicate examples in the outputs that follow.
  • A default route entry is created for the NIC [1] on which the zone is configured when the zone is booted. [2]
  • Entries are not deleted when a zone is halted. They persist until manually removed [3] or a reboot of the global zone.
  • It is possible to have the same default router configured for multiple zones. [4]
  • It is possible to have the same default router listed on multiple interfaces. * [5]
  • It is possible to have multiple default routers on the same interface, even on different IP subnets. [6]
  • The interface used for outbound traffic is the one the zone is assigned to. [7]
  • It is sufficient to plumb the interface for the non-global zones in the global zone (thus it has 0.0.0.0 as its IP address in the global zone). [8]
  • The physical interface can be down in the global zone. [9]
  • If only one interface is used, and different subnets for the global and non-global zones are configured, routing works when setting defrouter [10] and does not work if it is not set.
The most interesting thing I noticed was that although two non-global zones may be on the same IP subnet, if they are configured on different interfaces, their traffic leaves the system on the interface each zone is configured on. This is typically not the case when using shared IP with an IP address for that subnet also configured in the global zone.

* Note: Having two interfaces on the same IP subnet without configuring IP Multipathing (IPMP) may not be a supported configuration. I am looking for documentation that states this one way or another. [2009.01.14]
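As a quick cross-check of the addressing used throughout the examples that follow, here is a small Python sketch (my addition, using the standard ipaddress module, nothing Solaris-specific) confirming that the /26 zone addresses, the default router, and the broadcast address all agree with the zonecfg and ifconfig output:

```python
import ipaddress

# shared1/shared2/shared3 use 10.1.14.141-143/26.
net = ipaddress.ip_network("10.1.14.141/26", strict=False)

print(net)                     # 10.1.14.128/26 -- the subnet the zones share
print(net.netmask)             # 255.255.255.192, ifconfig's ffffffc0
print(net.broadcast_address)   # 10.1.14.191, as ifconfig shows

# The defrouter 10.1.14.129 must be on-link for the zones to use it.
print(ipaddress.ip_address("10.1.14.129") in net)   # True
```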

Examples

1. Single Zone, Single Interface--The Basics

Create a single non-global zone.
global# netstat -rn

Routing Table: IPv4
  Destination           Gateway           Flags  Ref     Use     Interface
-------------------- -------------------- ----- ----- ---------- ---------
default              139.164.63.215       UG        1          2 bge0
139.164.63.0         139.164.63.125       U         1          1 bge0
224.0.0.0            139.164.63.125       U         1          0 bge0
127.0.0.1            127.0.0.1            UH        1         42 lo0

global# zonecfg -z shared1 info net
net:
        address: 10.1.14.141/26
        physical: bge1
        defrouter: 10.1.14.129

global# zoneadm -z shared1 boot [2]

global# netstat -rn

Routing Table: IPv4
  Destination           Gateway           Flags  Ref     Use     Interface
-------------------- -------------------- ----- ----- ---------- ---------
default              139.164.63.215       UG        1          2 bge0
default              10.1.14.129          UG        1          0 bge1 [1]
139.164.63.0         139.164.63.125       U         1          1 bge0
224.0.0.0            139.164.63.125       U         1          0 bge0
127.0.0.1            127.0.0.1            UH        1         42 lo0

global# zoneadm -z shared1 halt

global# zoneadm list -v
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared

global# netstat -rn

Routing Table: IPv4
  Destination           Gateway           Flags  Ref     Use     Interface
-------------------- -------------------- ----- ----- ---------- ---------
default              10.1.14.129          UG        1          0 bge1
default              139.164.63.215       UG        1          1 bge0
139.164.63.0         139.164.63.125       U         1          1 bge0
224.0.0.0            139.164.63.125       U         1          0 bge0
127.0.0.1            127.0.0.1            UH        1         42 lo0

global# route delete default 10.1.14.129 [3]
delete net default: gateway 10.1.14.129

global# netstat -rn

Routing Table: IPv4
  Destination           Gateway           Flags  Ref     Use     Interface
-------------------- -------------------- ----- ----- ---------- ---------
default              139.164.63.215       UG        1          1 bge0
139.164.63.0         139.164.63.125       U         1          1 bge0
224.0.0.0            139.164.63.125       U         1          0 bge0
127.0.0.1            127.0.0.1            UH        1         42 lo0

2. Multiple Interfaces, Same Default Router

Three zones, where two use bge1 and the third uses bge2. All use the same default router.
global# netstat -rn

Routing Table: IPv4
  Destination           Gateway           Flags  Ref     Use     Interface
-------------------- -------------------- ----- ----- ---------- ---------
default              139.164.63.215       UG        1          1 bge0
139.164.63.0         139.164.63.125       U         1          1 bge0
224.0.0.0            139.164.63.125       U         1          0 bge0
127.0.0.1            127.0.0.1            UH        1         42 lo0

global# zonecfg -z shared1 info net
net:
        address: 10.1.14.141/26
        physical: bge1
        defrouter: 10.1.14.129 [4]

global# zonecfg -z shared2 info net
net:
        address: 10.1.14.142/26
        physical: bge1
        defrouter: 10.1.14.129 [4]

global# zonecfg -z shared3 info net
net:
        address: 10.1.14.143/26
        physical: bge2
        defrouter: 10.1.14.129 [5]

global# zoneadm -z shared1 boot

global# zoneadm -z shared2 boot

global# zoneadm -z shared3 boot

global# netstat -rn

Routing Table: IPv4
  Destination           Gateway           Flags  Ref     Use     Interface
-------------------- -------------------- ----- ----- ---------- ---------
default              10.1.14.129          UG        1          0 bge1 [4]
default              139.164.63.215       UG        1          1 bge0
default              10.1.14.129          UG        1          2 bge2 [5]
139.164.63.0         139.164.63.125       U         1          1 bge0
224.0.0.0            139.164.63.125       U         1          0 bge0
127.0.0.1            127.0.0.1            UH        1         42 lo0

global# zoneadm list -v
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   3 shared1          running    /zones/shared1                 native   shared
   4 shared2          running    /zones/shared2                 native   shared
   5 shared3          running    /zones/shared3                 native   shared

global# ifconfig -a4
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone shared1
        inet 127.0.0.1 netmask ff000000
lo0:2: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone shared2
        inet 127.0.0.1 netmask ff000000
lo0:3: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone shared3
        inet 127.0.0.1 netmask ff000000
bge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 139.164.63.125 netmask ffffff00 broadcast 139.164.63.255
        ether 0:3:ba:e3:42:8b
bge1: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        inet 0.0.0.0 netmask 0
        ether 0:3:ba:e3:42:8c
bge1:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        zone shared1
        inet 10.1.14.141 netmask ffffffc0 broadcast 10.1.14.191
bge1:2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        zone shared2
        inet 10.1.14.142 netmask ffffffc0 broadcast 10.1.14.191
bge2: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
        inet 0.0.0.0 netmask 0
        ether 0:3:ba:e3:42:8d
bge2:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
        zone shared3
        inet 10.1.14.143 netmask ffffffc0 broadcast 10.1.14.191

3. Multiple Subnets

Add another zone, using bge2 and on a different subnet.
global# zonecfg -z shared4 info net
net:
        address: 192.168.16.144/24
        physical: bge2
        defrouter: 192.168.16.129

global# zoneadm -z shared4 boot

global# netstat -rn

Routing Table: IPv4
  Destination           Gateway           Flags  Ref     Use     Interface
-------------------- -------------------- ----- ----- ---------- ---------
default              10.1.14.129          UG        1          0 bge1
default              10.1.14.129          UG        1          4 bge2
default              139.164.63.215       UG        1          3 bge0
default              192.168.16.129       UG        1          0 bge2 [6]
139.164.63.0         139.164.63.125       U         1          4 bge0
224.0.0.0            139.164.63.125       U         1          0 bge0
127.0.0.1            127.0.0.1            UH        1         42 lo0

4. Interface Usage

Issue some pings from within the non-global zones to a remote system on the same network as the global zone (139.164.63.0), and see which network interfaces are used. [7]
global# zlogin shared1 ping 139.164.63.38
139.164.63.38 is alive

global# zlogin shared2 ping 139.164.63.38
139.164.63.38 is alive

global# zlogin shared3 ping 139.164.63.38
139.164.63.38 is alive

global# zlogin shared4 ping 139.164.63.38
139.164.63.38 is alive
This shows the pings originating from shared1 and shared2 going out on bge1.
global# snoop -d bge1 icmp
Using device /dev/bge1 (promiscuous mode)
 10.1.14.141 -> 139.164.63.38 ICMP Echo request (ID: 4677 Sequence number: 0)
139.164.63.38 -> 10.1.14.141  ICMP Echo reply (ID: 4677 Sequence number: 0)
 10.1.14.142 -> 139.164.63.38 ICMP Echo request (ID: 4681 Sequence number: 0)
139.164.63.38 -> 10.1.14.142  ICMP Echo reply (ID: 4681 Sequence number: 0)
And this shows the pings originating from shared3 and shared4 going out on bge2.
global# snoop -d bge2 icmp
Using device /dev/bge2 (promiscuous mode)
 10.1.14.143 -> 139.164.63.38 ICMP Echo request (ID: 4685 Sequence number: 0)
139.164.63.38 -> 10.1.14.143  ICMP Echo reply (ID: 4685 Sequence number: 0)
192.168.16.144 -> 139.164.63.38 ICMP Echo request (ID: 4689 Sequence number: 0)
139.164.63.38 -> 192.168.16.144 ICMP Echo reply (ID: 4689 Sequence number: 0)
Just to confirm where each zone is configured, here is the ifconfig output.
global# ifconfig -a4
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone shared1
        inet 127.0.0.1 netmask ff000000
lo0:2: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone shared2
        inet 127.0.0.1 netmask ff000000
lo0:3: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone shared3
        inet 127.0.0.1 netmask ff000000
lo0:4: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone shared4
        inet 127.0.0.1 netmask ff000000
bge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 139.164.63.125 netmask ffffff00 broadcast 139.164.63.255
        ether 0:3:ba:e3:42:8b
bge1: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3 [9]
        inet 0.0.0.0 netmask 0 [8]
        ether 0:3:ba:e3:42:8c
bge1:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        zone shared1
        inet 10.1.14.141 netmask ffffffc0 broadcast 10.1.14.191
bge1:2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        zone shared2
        inet 10.1.14.142 netmask ffffffc0 broadcast 10.1.14.191
bge2: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
        inet 0.0.0.0 netmask 0 [8]
        ether 0:3:ba:e3:42:8d
bge2:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
        zone shared3
        inet 10.1.14.143 netmask ffffffc0 broadcast 10.1.14.191
bge2:2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
        zone shared4
        inet 192.168.16.144 netmask ffffff00 broadcast 192.168.16.255

5. Using a Single Interface

Using only bge0, with different subnets for the global and non-global zones. [10]

Before booting the zone.

global# netstat -nr

Routing Table: IPv4
  Destination           Gateway           Flags  Ref     Use     Interface
-------------------- -------------------- ----- ----- ---------- ---------
default              139.164.63.215       UG        1          2 bge0
139.164.63.0         139.164.63.125       U         1          2 bge0
224.0.0.0            139.164.63.125       U         1          0 bge0
127.0.0.1            127.0.0.1            UH        1         42 lo0

global# zonecfg -z shared17 info net
net:
        address: 192.168.17.147/24
        physical: bge0
        defrouter: 192.168.17.16

global# zoneadm -z shared17 boot
Once the zone is booted, netstat shows both default routes, and a ping from the zone works.
global# netstat -rn

Routing Table: IPv4
  Destination           Gateway           Flags  Ref     Use     Interface
-------------------- -------------------- ----- ----- ---------- ---------
default              139.164.63.215       UG        1          2 bge0
default              192.168.17.16        UG        1          0 bge0
139.164.63.0         139.164.63.125       U         1          2 bge0
224.0.0.0            139.164.63.125       U         1          0 bge0
127.0.0.1            127.0.0.1            UH        1         42 lo0

global# ifconfig -a4
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone shared17
        inet 127.0.0.1 netmask ff000000
bge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 139.164.63.125 netmask ffffff00 broadcast 139.164.63.255
        ether 0:3:ba:e3:42:8b
bge0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        zone shared17
        inet 192.168.17.147 netmask ffffff00 broadcast 192.168.17.255

global# zlogin shared17 ping 139.164.63.38
139.164.63.38 is alive

IP Multipathing (IPMP)

I did some testing with IPMP, using examples similar to those above. At this time the combination of IPMP and the defrouter configuration does not work. I have filed bug 6792116 to have this looked at.

[Updated 2009.01.20] After some additional testing, especially with test addresses and probe-based failure detection, I have seen IPMP work well only when zones are configured such that at least one zone is on each NIC in an IPMP group, including a standby NIC. For example, if you have two NICs, bge1 and bge2, at least one zone must be configured on bge1 and at least one on bge2. This is the case even when one of the NICs is in failed mode when the system or zone(s) boot. It turns out that the default route is added when the zone boots, and there is no later check for default route requirements as a zone is moved from one NIC to another by an IPMP failover or failback. Thus, I would recommend not using defrouter and IPMP together until the combination is confirmed to work.

If this is important for your deployments, please add a service record to change request 6792116 and work with your service provider to have this addressed. Note also that this combination works well with the IPMP Re-architecture coming soon to OpenSolaris.

Thursday Feb 14, 2008

Network Virtualization and Resource Control--Crossbow pre-Beta

The pre-beta bits and updated material for Project Crossbow have been posted to the opensolaris.org web site. If you are interested in splitting up a NIC into several virtual NICs, limiting network bandwidth, allocating CPUs to specific network traffic, faster datagram forwarding, or enhanced visibility into what your network traffic looks like, check it out.

The code is available as a customized Nevada build 81 image, or you can install the BFU bits on top of an existing build 81 install. It may work with a slightly older or newer build (I did some testing with build 82), but that has not been fully tested.

The plan is to integrate these features into Nevada after the beta period and your feedback.

Thanks to the engineering team for all the effort in getting this out! Many of my customers have been waiting for this to become available.

Patches for Using IP Instances with ce NICs are Available

The [Solaris 10] patches to be able to use IP Instances with the Cassini ethernet interface, known as ce, are available on sunsolve.sun.com for Solaris 10 users with a maintenance contract or subscription. (This is for Solaris 10 8/07, or a prior update patched to that level. These patches are included in Solaris 10 5/08, and also in patch clusters or bundles delivered at or around the same time, and since then.)

The SPARC patches are:

  • 137042-01 SunOS 5.10: zoneadmd patch
  • 118777-12 SunOS 5.10: Sun GigaSwift Ethernet 1.0 driver patch

The x86 patches are:

  • 137043-01 SunOS 5.10_x86: zoneadmd patch
  • 118778-11 SunOS 5.10_x86: Sun GigaSwift Ethernet 1.0 driver patch

I have not been able to try out the released patches myself, yet.

Steffen

Wednesday Jan 30, 2008

IP Instances with ce NICs patches are in progress!

The patches for the ON (OS and networking) part of the changes to allow IP Instances to work with the ce NICs (CR 6616075) are in progress. The patch numbers will be:

137042-01 (SPARC)
137043-01 (i386, x86, x64)

The patches should be available in about two weeks, after final internal and customer testing. If you have a service contract, you can get a temporary T-patch as interim relief, with all the caveats of a T-patch. Folks with an escalation should already have been notified. The fix will also be delivered in the next update of Solaris 10. It did not make the Beta of that update (Update 5), however. Don't forget, you also need the ce patch:

118777-12 (SPARC)
118778-11 (i386, x86, x64)

Happy IP-Instancing with ce!!
