Using uperf to compare network ping-pong rates on Solaris and Linux

uperf is a network performance evaluation tool that Sun is open sourcing today. Unlike tools such as ttcp, netperf, or iperf, which are geared primarily toward measuring bulk throughput, uperf is meant to be flexible enough to generate custom network traffic: you specify the characteristics of the traffic as seen at the application layer, then evaluate, tune, and optimize your system's performance for that workload.

For more details, see the uperf website.

In this post, I'm going to show an example of using uperf to compare Solaris and Linux, using the network "ping-pong" rate as the metric of interest.

A "ping-pong" network traffic is characterized by a send(2) followed by a recv(2) sequence at the application. The sizes of the messages sent and received will usually be asymmetric but in this case for simplicity, I'll choose to use identical send and receive sizes (in this case 64 bytes).

The metric of interest is the peak throughput, measured as messages sent and received per second, that can be achieved as we scale the number of connections.

Here's the uperf XML profile to do this:

<?xml version="1.0"?>
<profile name="TCP PingPong">
        <group nthreads="$t">
                <transaction iterations="1">
                     <flowop type="connect" options="remotehost=$h protocol=tcp"/>
                </transaction>
                <transaction duration="30s">
                     <flowop type="read" options="size=64"/>
                     <flowop type="write" options="size=64"/>
                </transaction>
                <transaction iterations="1">
                     <flowop type="disconnect" />
                </transaction>
        </group>
</profile>

To use this profile under uperf (the $h and $t variables in the profile are filled in from environment variables of the same name):

  1. On the SUT, run uperf -s
  2. On the load generator:
    1. Save the above profile in a file called "pingpong.xml". If you use a different name, adjust the script below accordingly.
    2. In a [k]sh-type shell, run this:
(export h=...fill in the SUT hostname or IP address here...
export t=1
while [ $t -lt 4096 ] ; do
    uperf -m pingpong.xml
    t=$((t * 2))
done) | tee /var/tmp/pp.out
grep bge1 /var/tmp/pp.out # change bge1 to the NIC you are using as appropriate

Note that I'm using uperf threads, not processes, so on Solaris this is limited to roughly 4000 threads in one process. If I were using processes, the only practical limit would be available memory (and patience).

I used a Sun x4200 as the SUT, with Solaris and Linux partitions on it. The load generator was an identical Sun x4200 running Solaris. The results showed that Solaris out of the box underperforms Linux, but with a tuning that enables "soft rings" to offload network interrupt processing, it can outperform Linux. The test was done using the Broadcom "bge" interface.
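The soft-ring tuning itself isn't shown above. Here is a minimal sketch, assuming the pre-Crossbow /etc/system tunables ip_squeue_soft_ring and ip_soft_rings_cnt; the ring count is only an example value, so check the Tunable Parameters Reference for your Solaris release before applying it:

# Sketch only: enable soft rings on pre-Crossbow Solaris 10.
# Tunable names and the ring count are assumptions, not taken from this post.
echo 'set ip:ip_squeue_soft_ring=1' >> /etc/system  # fan inbound packet processing out to soft rings
echo 'set ip:ip_soft_rings_cnt=8' >> /etc/system    # soft rings per interface (example value)
# /etc/system changes take effect after a reboot.

As noted in the comments below, the tradeoff is some added per-packet latency in exchange for spreading receive-side processing across CPUs.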

Comments:

Looks interesting - to be a bit more precise, netperf isn't just about bulk throughput, it is also about latency. However, it does not directly model user-level packet formats as it seems uperf might. Netperf4 is rather more configurable than netperf2, offering ways to provide lists of message sizes to be sent etc...

rick jones aka mr netperf :)

Posted by rick jones on June 27, 2008 at 05:50 AM PDT #

I prefer "size=6", which frame size is 64.

Posted by Zeeshan on July 07, 2008 at 08:58 PM PDT #

So why isn't the soft ring setting used by default?

Posted by Haik on August 18, 2008 at 09:47 AM PDT #

Rick:
Pre-uperf, we had a harness over netperf that we used almost exclusively for performance evaluation. It sees pretty wide use - mostly in the QA groups to check for performance regressions - so: Thanks for a great tool!

And as you point out, netperf can be used for latency as well - I did say "primarily" ;-)

Posted by Charles Suresh on August 18, 2008 at 12:19 PM PDT #

Haik:
There is a latency vs throughput tradeoff in enabling soft rings.
FWIW, there is a bug against this issue, if you are interested:
6621217 Need more soft rings for 10GbE by default pre-Crossbow for OOBP

Posted by Charles Suresh on August 18, 2008 at 12:21 PM PDT #

Charles,

do you have the test results with the nxge driver + NIC ?

Thanks
Jose

Posted by Jose Luu on May 14, 2009 at 07:18 PM PDT #
