
Jeff Taylor's Weblog

Sunday, June 24, 2007

Configuring jumbo frames on the V490's ce and the T2000's e1000g

I was hoping that using Solaris 10's jumbo frames would reduce Windchill's latency as seen by end-users. I was surprised at how difficult it was to find consistent documentation and, in the end, disappointed that it didn't make much of a difference. I am writing this blog in the hope that it may help others configure jumbo frames more quickly for similar experiments.

As stated by Wikipedia, "In computer networking, jumbo frames are Ethernet frames above 1518 bytes in size. The most commonly supported implementations of hardware support for jumbo frames have a MTU of 9000 bytes. Jumbo frames, while sometimes used on a LAN, are rarely used when exchanging data, especially over the Internet."

In my test, I wanted to use jumbo frames for links "inside the data center": specifically, (1) the Sun Cluster interconnect between the two V490 Sun Cluster Oracle RAC nodes, and (2) the communication between the T2000 application tier servers and the V490 database servers. The hope was that I would see a substantial reduction in system time (i.e. CPU time inside the kernel) when fewer I/O operations were required for a given payload. I did not attempt to use jumbo frames for the final HTML traffic to the end-users, who would be "outside of the data center".
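One simple way to watch for that effect is to compare kernel CPU time with and without jumbo frames while the same load is running. This is just a sketch using the bundled tools, not part of the configuration below: the "sys" column of mpstat (or the "sy" column under "cpu" in vmstat) is the time spent in the kernel.

# mpstat 5
# vmstat 5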

The V490 Sun Cluster interconnect was via crossover cables, so there was no limit imposed by a router on the MTU size. The router between the application servers and the database servers advertised that it supported "8k" frames, so I set the MTU to less than 8192 bytes.

A) Configuring jumbo frames on the V490s

1) Find the path to the device: 

# grep ce /etc/path_to_inst
"/scsi_vhci/ssd@g60020f20000063f0438c7cce0006cd71" 14 "ssd"
"/pci@8,700000/pci@2/network@0" 2 "ce"
"/pci@8,700000/pci@2/network@1" 3 "ce"
"/pci@9,700000/network@2" 0 "ce"
"/pci@9,600000/network@1" 1 "ce"

2) Configure the ce driver:

# cat /platform/sun4u/kernel/drv/ce.conf
name="pci108e,abba" parent="/pci@8,700000/pci@2" unit-address="0" accept-jumbo=1;
name="pci108e,abba" parent="/pci@8,700000/pci@2" unit-address="1" accept-jumbo=1;
name="pci108e,abba" parent="/pci@9,700000" unit-address="2" accept-jumbo=1;
name="pci108e,abba" parent="/pci@9,600000" unit-address="1" accept-jumbo=1;

3) Set the MTU in the hostname file:

# cat /etc/hostname.ce0
scnode1 mtu 8168 group sc_ipmp0 -failover
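If a reboot is inconvenient, the MTU can also be raised on a live interface with ifconfig; as I understand it this only succeeds once the driver itself accepts jumbo frames (the ce.conf change above), so treat the following as a sketch rather than a supported recipe:

# ifconfig ce0 mtu 8168
# ifconfig ce0 | grep mtu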

4) Verify after reboot:

# ifconfig -a | grep mtu
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
ce0: flags=1009000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,NOFAILOVER,FIXEDMTU> mtu 8168 index 2
ce2: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 9194 index 4
ce3: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 9194 index 3
clprivnet0: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 9194 index 5
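As a quick sanity check that the interconnect really carries large frames end to end, a jumbo-sized ping between the nodes should succeed in a single frame (9000 bytes of data plus 28 bytes of IP and ICMP headers still fits under the 9194-byte MTU). The private hostname below just follows the usual Sun Cluster clusternodeN-priv naming and is only illustrative:

# ping -s clusternode2-priv 9000 3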


B) Configuring jumbo frames on the T2000s

1) Configure the e1000g driver:

# grep MaxF /kernel/drv/e1000g.conf

MaxFrameSize=2,2,2,2,0,0,0,0,0,0,0,0,0,0,0,0;
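Per the comments shipped in the stock e1000g.conf (my reading, so double-check your copy), each comma-separated position applies to one e1000g instance, and the value selects a frame-size ceiling: 0 for standard 1500-byte frames, 1 for up to roughly 4 KB, 2 for up to roughly 8 KB, and 3 for up to roughly 16 KB. So the line above lets instances e1000g0 through e1000g3 accept ~8 KB frames and leaves the rest at the default:

MaxFrameSize=2,2,2,2,0,0,0,0,0,0,0,0,0,0,0,0;    <- e1000g0-e1000g3 jumbo (~8 KB), e1000g4 and up standard (1500)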

2) Reboot and verify:

# ifconfig -a

...

e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 8168 index 2
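To confirm that large frames are actually on the wire between the application and database tiers (and not being fragmented somewhere along the way), snoop can show the Ethernet frame size while traffic is flowing, for example during a large ping or file copy. The interface and host names here are placeholders, not from my configuration:

# snoop -d e1000g0 -v -c 5 host dbnode | grep "Packet size"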


Comments (3)
  • Derek Morr Monday, June 25, 2007
    There's a simpler way to enable jumbo frames for ce. In /platform/sun4u/kernel/drv/ce.conf
    just set "accept-jumbo = 1;" -- that will enable jumbo frames for all ce interfaces in the system.
    But, yes, I agree that Solaris definitely needs a consistent way of enabling jumbo frames. It's unacceptable that ce, bge, ipge, and e1000g all have different ways of enabling support!
  • Al Jurgensen Tuesday, May 13, 2008

    Did you mean jumbo frames "didn't make too much difference" ? Did it improve RAC performance?


  • crazySol Wednesday, September 23, 2009

    Yeah I agree but what about bnx? I couldn't find any way that worked for me with BCM5709 and BCM5721

