Monday Nov 03, 2008

Examining Large Segment Offload (LSO) in the Solaris Networking Stack

In this blog article, I will share my experience with Large Segment Offload (LSO), one of the recent additions to the Solaris network stack. I will discuss a few observability tools, and also what helps achieve better LSO.

LSO saves valuable CPU cycles by allowing the network protocol stack to handle large segments instead of the traditional model of MSS-sized segments. In the traditional network stack, the TCP layer segments the outgoing data into MSS-sized segments and passes them down to the driver. This becomes computationally expensive with 10 GigE networking because of the large number of kernel function calls required for every MSS-sized segment. With LSO, TCP passes a large segment down to the driver, and the driver or NIC hardware does the job of TCP segmentation. An LSO segment may be as large as 64 KBytes. The larger the LSO segment, the better the CPU efficiency, since the network stack has to work with a smaller number of segments for the same throughput. For example, a 64 KByte LSO segment replaces about 45 standard 1460-byte MSS segments, so the per-segment call overhead is paid roughly 45 times less often. The size of the LSO segment is the key metric we will examine in our discussion here.

Simply put, the LSO segment size is better (higher) when the thread draining the data can drive as much data as possible. A thread can drive only as much data as the TCP congestion window allows. What we need to ensure is that (i) the TCP congestion window is large enough, and (ii) enough data is ready to be transmitted by the draining thread.

It is important to remember that in the Solaris networking stack, packets may be drained by three different threads:

(i) The thread writing to the socket may drain its own packets and other packets in the squeue.
(ii) The squeue worker thread may drain all the packets in the squeue.
(iii) The interrupt (or soft ring) thread may drain the squeue.

The ratio in which these three cases occur depends on system dynamics. Nevertheless, it is useful to keep them in mind in the context of the discussion below. An easy way to monitor them is to check the stack counts reported by the following DTrace one-liner.

dtrace -n 'tcp_lsosend_data:entry{@[stack()]=count();}'
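The same probe can also be used to watch the first of the two conditions noted earlier, the size of the TCP congestion window. The following one-liner is a sketch: it assumes arg0 of tcp_lsosend_data is the tcp_t pointer, as it is in the OpenSolaris builds I have looked at.

dtrace -n 'tcp_lsosend_data:entry{@["avg cwnd (bytes)"] = avg(((tcp_t *)arg0)->tcp_cwnd);}'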


Experiments

Our experiment testbed is as follows. We connect a Sun Fire X4440 server (a 16-core, 4-socket AMD Opteron based system) to 15 V20z clients. The X4440 server has PCI-E x8/x16 slots. Out of the different possible options for 10 GigE NICs, we chose the Myricom 10 GigE PCI-E card because it supports native PCI-E along with hardware LSO (hardware LSO is more CPU efficient). Another option is the Sun Multithreaded 10 GigE PCI-E card, which supports software LSO. LSO is enabled by default in the Myricom 10 GigE driver. LSO may be enabled in the Sun Multithreaded 10 GigE driver nxge by commenting out the appropriate line in /kernel/drv/nxge.conf.
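For illustration only: in the nxge.conf files I have seen, the property involved is soft-lso-enable, but both the property name and the shipped default are assumptions here, so check the comments in your own /kernel/drv/nxge.conf before editing.

# /kernel/drv/nxge.conf (illustrative excerpt; property name assumed)
# Commenting out the line that disables software LSO enables it:
# soft-lso-enable = 0;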

Each of the 15 clients has a Broadcom 1 GigE NIC. The clients and the server are connected to an independent VLAN in a Cisco Catalyst 6509 switch. All systems run OpenSolaris.

We use the following DTrace script to observe the LSO segment size. It reports the average LSO segment size in bytes every 5 seconds.

bash-3.2# cat tcpavgsegsize.d
#!/usr/sbin/dtrace -s
/*
 * Report the average LSO segment size (arg5, in bytes) every 5 seconds.
 */

tcp_lsosend_data:entry
{
        @av[0] = avg(arg5);
}

tick-5s
{
        printa(@av);
        trunc(@av);
}


Now, let us run a simple experiment. We use uperf to run a throughput test with a profile that drives as much traffic as possible, writing 64 KBytes to the socket and using one connection to each client; a sketch of the profile is shown below. We run the above DTrace script at the server (X4440) during the run.
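This is only a sketch of what such a throughput profile might look like; the flowop options and the $h placeholder for the client hostname are illustrative, not the exact profile used in these runs.

<?xml version="1.0"?>
<profile name="throughput-64k">
  <group nthreads="1">
    <transaction iterations="1">
      <flowop type="connect" options="remotehost=$h protocol=tcp"/>
    </transaction>
    <transaction duration="120s">
      <!-- Write 64 KBytes to the socket per call, as fast as possible -->
      <flowop type="write" options="size=64k"/>
    </transaction>
    <transaction iterations="1">
      <flowop type="disconnect"/>
    </transaction>
  </group>
</profile>

Here is the DTrace output at the server during the run: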

Example 1: Throughput profile with 64K writes and one connection per client.
bash-3.2# ./tcpavgsegsize.d

        0            40426

        0            40760

        0            40530
The above numbers are at 5-second intervals. We are seeing about 40 KByte segments per transmit. That is much better than one MSS (1460 bytes) per transmit in the traditional network stack.

To demonstrate what helps get better LSO, let us run the same experiment, but with a SPECweb support oriented profile instead of the throughput profile. In this profile, uperf writes 64 KBytes to the socket and waits for the receiver to send back 64 bytes before it writes again (this emulates the request-response pattern of clients requesting large files from a server). The change amounts to a small edit to the main transaction, sketched below.
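A sketch of the request-response transaction, with the same caveat that the option syntax is illustrative:

    <transaction duration="120s">
      <!-- Write 64 KBytes, then block until 64 bytes come back -->
      <flowop type="write" options="size=64k"/>
      <flowop type="read" options="size=64"/>
    </transaction>

Measuring LSO during the run with the same DTrace script, we get: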

Example 2: Specweb profile with 64K writes, one connection per client.
bash-3.2# ./tcpavgsegsize.d

        0            62693

        0            58388

        0            60084


Our LSO segment size increased from about 40 KBytes to 60 KBytes. The SPECweb support profile ensures that the next batch of writes to a connection occurs only after the previous batch has been read by the client. Since the ACKs for the previous writes have been received by then, the next 64 KByte write sees an empty TCP congestion window and can indeed drain all 64 KBytes. Note that the LSO segment size is very near its maximum possible value of 64 KBytes. Indeed, we can get the full 64 KBytes if we use the above profile with only one client. Here is the output:

Example 3: Specweb profile with a single client
bash-3.2# ./tcpavgsegsize.d

        0            65536

        0            65524

        0            65524


Now let us move back to the throughput profile. If we reduce the write size in the throughput profile from 64 KBytes to 1 KByte, we get much worse LSO. With smaller writes, the number of bytes drained by any of threads (i), (ii), or (iii) is smaller, leading to smaller LSO segments.
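In the profile sketch above, this is just a change to the write flowop:

      <flowop type="write" options="size=1k"/>

Here is the result.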

Example 4: Throughput profile with 1K writes.
bash-3.2# ./tcpavgsegsize.d

        0            11127

        0            10381

        0            10640


Now let us increase the number of connections per client to 2000, so we have 30000 connections across the 15 clients. This is a bulk throughput workload; in the profile, this amounts to raising the group's thread count, as sketched below.
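This is a sketch, assuming, as in the profile above, that each uperf thread in the group opens its own connection to the client:

  <group nthreads="2000">
    ...
  </group>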

Example 5: Throughput profile with 2000 connections per client
bash-3.2# ./tcpavgsegsize.d

        0             5496

        0             5084

        0             5069
Here the LSO segments are smaller because we are limited by the TCP congestion window. With a larger number of connections, the per-connection TCP congestion window depends more on ACKs clearing previously sent data, so transmits are more ACK-driven than in any of the other cases.

The other factor to keep in mind is that out-of-order segments or duplicate ACKs may reduce the TCP congestion window. To check for them, use the following commands at the server and client respectively:

netstat -s -P tcp 1 | grep tcpInDup
netstat -s -P tcp 1 | grep tcpInUnorderSegs
Ideally, the number of duplicate ACKs and out-of-order segments should be as close to 0 as possible.

An interesting exercise is to monitor the ratio of (i), (ii), and (iii) in each of the above cases, using the stack aggregation one-liner shown earlier. Here is the data.

Example   (i)    (ii)    (iii)
   1      12%     0%      88%
   2      70%     0%      30%
   3      98%     0%       2%
   4      74%     1%      24%
   5      34%    37%      29%
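For completeness, here is a sketch of how such a breakdown can be collected. It aggregates on execname and a short stack at the same probe; mapping each stack to case (i), (ii), or (iii) is then done by eye, and the distinguishing frame names (for example, a squeue worker frame) depend on the kernel build, so treat them as assumptions.

#!/usr/sbin/dtrace -s
/*
 * Sketch: attribute LSO sends to the draining thread.
 * The application (e.g. uperf) as execname suggests case (i);
 * "sched" with a squeue worker frame suggests case (ii);
 * interrupt/soft ring stacks suggest case (iii).
 */

tcp_lsosend_data:entry
{
        @[execname, stack(8)] = count();
}

tick-30s
{
        printa(@);
        exit(0);
}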


To summarize, we have noted the following about LSO:

(1) A larger LSO segment size improves CPU efficiency, since fewer function calls are needed per byte of data sent out.
(2) A request-response profile drives a larger LSO segment size than a throughput oriented profile.
(3) A larger write size (up to 64 KBytes) helps drive a larger LSO segment size.
(4) A smaller number of connections helps drive a larger LSO segment size.


Since a blog is a good medium for two-way communication, I appreciate comments and suggestions from readers. Please do post them in this forum or email them to me.