News, tips, partners, and perspectives for the Oracle Solaris operating system

Measuring Network Bandwidth Using iperf

Giri Mandalika
Principal Software Engineer





iperf is a simple, open source tool for measuring network bandwidth. It can test TCP or UDP throughput. Tools like iperf are useful for quickly checking the performance of a network by comparing the achieved bandwidth against expectations. The example in this blog post is from a Solaris system, but the instructions and testing methodology apply to all supported platforms, including Linux.

Download the source code from iperf's home page, and build the iperf binary. Those running Solaris 10 or later can download the pre-built binary (file size: 245K) from this location to give it a quick try (right-click and choose "Save Link As .." or a similar option).
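For those building from source, iperf follows the usual configure-and-make convention. A typical build might look like the sketch below; the tarball name and the /opt/iperf install prefix are hypothetical examples, so adjust them to match the release you downloaded.

```shell
# Unpack the source (iperf-3.0.tar.gz is an example filename).
tar xzf iperf-3.0.tar.gz
cd iperf-3.0

# Configure with an example prefix, then build and install.
./configure --prefix=/opt/iperf
make
make install    # may require elevated privileges for the chosen prefix
```

After installation, the binary lives under the chosen prefix (here, /opt/iperf/bin/iperf).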

Testing methodology:

iperf's network performance measurements are based on the client-server communication model, so they require both a server and a client. The same iperf binary can be used to run the process in server or client mode.


  1. Start iperf in server mode
    iperf -s -i <interval>

    Option -s or --server starts the process in server mode. -i or --interval is the sampling interval in seconds.

  2. Start iperf in client mode, and test the network connection between the client and the server with arbitrary data transfers.

    iperf -n <bytes> -i <interval> -c <ServerIP>

    Option -c or --client starts the process in client mode. Option -n or --bytes specifies the amount of data to transmit, in bytes, KB (use suffix K) or MB (use suffix M). -i or --interval is the sampling interval in seconds. The last option is the IP address or the hostname of the server to connect to. By default, the client connects to the server using TCP; -u or --udp switches to UDP.

  3. Check the network link speed on server and client, and compare the throughput achieved.

Check the man page for the full list of options supported by iperf in client and server modes.
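To make the UDP switch mentioned in step 2 concrete, here is a sketch of a UDP test run. The server address 10.0.0.1 and the 100M rate cap are made-up example values; -b (--bandwidth) sets the target sending rate for UDP tests, since UDP has no flow control of its own.

```shell
# Server side: the same invocation works for both TCP and UDP tests.
iperf -s -i 1

# Client side: -u switches to UDP, -b caps the sending rate.
# (10.0.0.1, 100M and 256M are hypothetical example values.)
iperf -u -b 100M -n 256M -i 1 -c 10.0.0.1
```

With UDP, the server-side report also includes packet loss and jitter, which the TCP test does not show.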

Here is a simple demonstration.

On server node:

iperfserv% dladm show-phys net0
LINK   MEDIA      STATE  SPEED  DUPLEX  DEVICE
net0   Ethernet   up     1000   full    igb0
iperfserv% ifconfig net0 | grep inet
inet 10.129.193.63 netmask ffffff00 broadcast 10.129.193.255
iperfserv% ./iperf -v
iperf version 3.0-BETA5 (28 March 2013)
SunOS iperfserv 5.11 11.1 sun4v sparc sun4v
iperfserv% ./iperf -s -i 1
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------


On client node:

client% dladm show-phys net0
LINK   MEDIA      STATE  SPEED  DUPLEX  DEVICE
net0   Ethernet   up     1000   full    igb0
client% ifconfig net0 | grep inet
inet 10.129.193.151 netmask ffffff00 broadcast 10.129.193.255
client% ./iperf -n 1024M -i 1 -c 10.129.193.63
Connecting to host 10.129.193.63, port 5201
[ 4] local 10.129.193.151 port 63507 connected to 10.129.193.63 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.01 sec 105 MBytes 875 Mbits/sec
[ 4] 1.01-2.02 sec 112 MBytes 934 Mbits/sec
[ 4] 2.02-3.00 sec 110 MBytes 934 Mbits/sec

[...]
[ 4] 8.02-9.01 sec 110 MBytes 933 Mbits/sec
[ 4] 9.01-9.27 sec 30.0 MBytes 934 Mbits/sec
[ ID] Interval Transfer Bandwidth
Sent
[ 4] 0.00-9.27 sec 1.00 GBytes 927 Mbits/sec
Received
[ 4] 0.00-9.27 sec 1.00 GBytes 927 Mbits/sec
iperf Done.


At the same time, somewhat similar messages are written to stdout on the server node.

iperfserv% ./iperf  -s -i 1
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 10.129.193.151, port 33457
[ 5] local 10.129.193.63 port 5201 connected to 10.129.193.151 port 63507
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-1.00 sec 104 MBytes 874 Mbits/sec
[ 5] 1.00-2.00 sec 111 MBytes 934 Mbits/sec
[ 5] 2.00-3.00 sec 111 MBytes 934 Mbits/sec

[...]
[ ID] Interval Transfer Bandwidth
Sent
[ 5] 0.00-9.28 sec 1.00 GBytes 927 Mbits/sec
Received
[ 5] 0.00-9.28 sec 1.00 GBytes 927 Mbits/sec
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------

The link speed is reported in Mbps (megabits per second). In the above example, the network link is operating at 1000 Mbps, and the achieved bandwidth is 927 Mbps, which is 92.7% of the advertised bandwidth.
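The percentage above is simple arithmetic, and it is easy to script. The following sketch hard-codes the link speed from dladm show-phys and the throughput from the iperf summary in this run; it uses only integer arithmetic, so it assumes the utilization works out to three digits in tenths of a percent, as it does here.

```shell
LINK_SPEED=1000   # link speed in Mbps, from dladm show-phys
ACHIEVED=927      # measured throughput in Mbps, from the iperf summary

# Utilization in tenths of a percent (927 here), then split into
# whole and fractional parts for display.
TENTHS=$(( ACHIEVED * 1000 / LINK_SPEED ))
echo "link utilization: ${TENTHS%?}.${TENTHS#??}%"
```

For this run the script prints "link utilization: 92.7%", matching the figure quoted above.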

Notes:


  • It is not necessary to run iperf in client or server mode as root or another privileged user
  • In server mode, iperf listens on port 5201 by default. This can be changed with the -p or --port option
  • Restart the iperf server after each client test to get reliable, consistent results
  • iperf is just one of many ways to measure network bandwidth. Other tools include uperf, ttcp, netperf, bwping, udpmon and tcpmon, to name a few. Research and pick the one that best suits your requirements.
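The port note above can be illustrated with a quick sketch. Port 5002 is an arbitrary example; the client must specify the same port the server listens on (the server IP is the one from the demonstration above).

```shell
# Server: listen on a non-default port (5002 is an arbitrary example).
iperf -s -p 5002 -i 1

# Client: point -p at the same port to reach the server.
iperf -p 5002 -n 1024M -i 1 -c 10.129.193.63
```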


Comments (3)
  • guest Thursday, January 9, 2014

    Hi Giri,

    Do you have Oracle Sun Servers consolidation ratio details sheet or M vales or tools like we have in IBM (Rperf) for T4/T5/M5/M6's Servers against Legacy Servers to size new solution?

    Thanks in Advance

    PK


  • guest Thursday, February 6, 2014

    how to set link speed in tcp mode?


  • guest Monday, March 17, 2014

    Hey Giri,

    I am new to traffic control, and am currently working on a bash script that throttles the bandwidth of a local server/client pair. I have been using iperf to test my script, and I've noticed that the measured results are always _slightly_ different than my mandated values, the server results are always low, and the client's are high. For instance, if I set the bandwidth to 8Mbit/s, the server shows 7.62Mbit/s, while the client shows 8.69Mbit/s. This result is also seen when I leave the connection unmodified, and when I switch the server and client, so I doubt it is a problem with my methods or hardware.

    Is this sort of inconsistency inherent to iperf? Any suggestions on how to fix this problem, or alternative bandwidth monitoring tools would be greatly appreciated!

    - Ian

