Saturday Dec 21, 2013

Measuring Network Bandwidth Using iperf

iperf is a simple, open source tool for measuring network bandwidth. It can test TCP or UDP throughput. Tools like iperf are useful for quickly checking the performance of a network by comparing the achieved bandwidth against expectations. The example in this blog post is from a Solaris system, but the instructions and testing methodology apply to all supported platforms including Linux.

Download the source code from iperf's home page and build the iperf binary. Those running Solaris 10 or later can download the pre-built binary (file size: 245K) from this location to give it a quick try (right click and "Save Link As .." or a similar option).

Testing methodology:

iperf's network performance measurements are based on the client-server communication model - hence they require establishing both a server and a client. The same iperf binary can be used to run the process in server or client mode.

  1. Start iperf in server mode
    iperf -s -i <interval>

    Option -s or --server starts the process in server mode. -i or --interval is the sampling interval in seconds.

  2. Start iperf in client mode, and test the network connection between the client and the server with arbitrary data transfers.

    iperf -n <bytes> -i <interval> -c <ServerIP>
    

    Option -c or --client starts the process in client mode. Option -n or --bytes specifies the number of bytes to transmit, in bytes, KB (use suffix K) or MB (use suffix M). -i or --interval is the sampling interval in seconds. The last option is the IP address or the hostname of the server to connect to. By default, the client connects to the server using TCP; -u or --udp switches to UDP.

  3. Check the network link speed on server and client, and compare the throughput achieved.

Check out the man page for the full list of options supported by iperf in client and server modes.
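
For example, to run the server on a non-default port and drive a UDP test against it instead of TCP, the same options combine as follows (an illustrative invocation, not from the original test run):

iperfserv% ./iperf -s -p 5555 -i 1
client% ./iperf -u -n 256M -i 1 -p 5555 -c iperfserv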

Here is a simple demonstration.

On server node:

iperfserv% dladm show-phys net0
LINK              MEDIA                STATE      SPEED  DUPLEX    DEVICE
net0              Ethernet             up         1000   full      igb0

iperfserv% ifconfig net0 | grep inet
        inet 10.129.193.63 netmask ffffff00 broadcast 10.129.193.255

iperfserv% ./iperf -v
iperf version 3.0-BETA5 (28 March 2013)
SunOS iperfserv 5.11 11.1 sun4v sparc sun4v


iperfserv% ./iperf -s -i 1
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------

On client node:

client% dladm show-phys net0
LINK              MEDIA                STATE      SPEED  DUPLEX    DEVICE
net0              Ethernet             up         1000   full      igb0

client% ifconfig net0 | grep inet
        inet 10.129.193.151 netmask ffffff00 broadcast 10.129.193.255

client% ./iperf  -n 1024M  -i 1 -c 10.129.193.63
Connecting to host 10.129.193.63, port 5201
[  4] local 10.129.193.151 port 63507 connected to 10.129.193.63 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.01   sec   105 MBytes   875 Mbits/sec
[  4]   1.01-2.02   sec   112 MBytes   934 Mbits/sec
[  4]   2.02-3.00   sec   110 MBytes   934 Mbits/sec
[...]
[  4]   8.02-9.01   sec   110 MBytes   933 Mbits/sec
[  4]   9.01-9.27   sec  30.0 MBytes   934 Mbits/sec
[ ID] Interval           Transfer     Bandwidth
      Sent
[  4]   0.00-9.27   sec  1.00 GBytes   927 Mbits/sec
      Received
[  4]   0.00-9.27   sec  1.00 GBytes   927 Mbits/sec

iperf Done.

At the same time, somewhat similar messages are written to stdout on the server node.

iperfserv% ./iperf  -s -i 1
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 10.129.193.151, port 33457
[  5] local 10.129.193.63 port 5201 connected to 10.129.193.151 port 63507
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec   104 MBytes   874 Mbits/sec
[  5]   1.00-2.00   sec   111 MBytes   934 Mbits/sec
[  5]   2.00-3.00   sec   111 MBytes   934 Mbits/sec
[...]
[ ID] Interval           Transfer     Bandwidth
      Sent
[  5]   0.00-9.28   sec  1.00 GBytes   927 Mbits/sec
      Received
[  5]   0.00-9.28   sec  1.00 GBytes   927 Mbits/sec
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------

The link speed is specified in Mbps (megabits per second). In the above example, the network link is operating at 1000 Mbps, and the achieved bandwidth is 927 Mbps, which is 92.7% of the advertised bandwidth.
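
To verify the reported number by hand, note that iperf counts bytes in powers of two but bits in powers of ten: the 1.00 GBytes transferred is 2^30 bytes, or about 8.59 x 10^9 bits, and dividing that by the 9.27 second run time gives roughly 927 Mbits/sec, matching the summary line.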

Notes:

  • It is not necessary to execute iperf in client or server mode as root or a privileged user
  • In server mode, iperf uses port 5201 by default. It can be changed using the -p or --port option
  • Restart the iperf server after each client test to get reliable, consistent results (one way to do so is shown after these notes)
  • iperf is just one of many ways to measure network bandwidth. There are other tools such as uperf, ttcp, netperf, bwping, udpmon, tcpmon, .. just to name a few. Research and pick the one that best suits your requirements.
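
A minimal way to restart the server between tests (an illustrative sketch; pressing Ctrl-C in the server's terminal and starting it again works just as well):

iperfserv% pkill -x iperf
iperfserv% ./iperf -s -i 1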

Monday Apr 06, 2009

Controlling [Virtual] Network Interfaces in a Non-Global Solaris Zone

In the software world, some tools like SAP NetWeaver's Adaptive Computing Controller (ACC) require full control over a network interface, so they can bring the NICs up and down at will to fulfill their responsibilities. Those tools may function normally on Solaris 10 [and later] as long as they run in the global zone. However, there may be trouble when those tools are run in a non-global zone, especially on machines with only one physical network interface installed and when the non-global zones are created with the default configuration. This blog post suggests a few solutions to get around those issues, so the tools can function the way they normally do in the global zone.

If the machine has only one NIC installed, there are at least two issues that will prevent tools like ACC from working in a non-global zone.

  1. Since there is only one network interface on the system, it is not possible to dedicate that interface to the non-global zone where ACC is supposed to run. Hence all the zones, including the global zone, must share the physical network interface.
  2. When the physical network interface is shared across multiple zones, it is not possible to plumb/unplumb the network interface from a Shared-IP Non-Global Zone. Only root users in the global zone can plumb/unplumb the lone physical network interface.
    • When a non-global zone is created with the default configuration, a Shared-IP zone is created by default. Shared-IP zones have separate IP addresses, but share the IP routing configuration with the global zone.

Fortunately, Solaris has a solution to the aforementioned issues in the form of network virtualization. Crossbow is the code name for network virtualization in Solaris. Crossbow provides the necessary building blocks to virtualize a single physical network interface into multiple virtual network interfaces (VNICs) - so the solution to the issue at hand is to create a virtual network interface, and then to create an Exclusive-IP Non-Global Zone using that virtual NIC. The rest of the blog post demonstrates the simple steps to create a VNIC and to configure a non-global zone as an Exclusive-IP Zone.

Create a Virtual Network Interface using Crossbow

  • Make sure the OS has Crossbow functionality
    
    global# cat /etc/release
                     Solaris Express Community Edition snv_111 SPARC
               Copyright 2009 Sun Microsystems, Inc.  All Rights Reserved.
                            Use is subject to license terms.
                                 Assembled 23 March 2009
    
    

    Crossbow has been integrated into Solaris Express Community Edition (Nevada) build 105 - hence all Nevada builds starting with build 105 will have the Crossbow functionality. OpenSolaris 2009.06 and the next major update to Solaris 10 are expected to have the support for network virtualization out-of-the-box.

  • Check the existing zones and the available physical and virtual network interfaces.
    
    global# zoneadm list -cv
      ID NAME             STATUS     PATH                           BRAND    IP    
       0 global           running    /                              native   shared
    
    global# dladm show-link
    LINK        CLASS    MTU    STATE    OVER
    e1000g0     phys     1500   up       --
    
    

    In this example, there is only one NIC, e1000g0, on the server; and there are no non-global zones installed.

  • Create a virtual network interface based on device e1000g0 with an automatically generated MAC address. If the NIC has factory MAC addresses available, one of them is used; otherwise, a random address is selected. This auto mode is the default action if none is specified (see the sketch after this list for choosing a MAC address explicitly).
    
    global# dladm create-vnic -l e1000g0 vnic1
    
    
  • Check the available network interfaces one more time. Now you should be able to see the newly created virtual NIC in addition to the existing physical network interface. It is also possible to list only the virtual NICs.
    
    global# dladm show-link
    LINK        CLASS    MTU    STATE    OVER
    e1000g0     phys     1500   up       --
    vnic1       vnic     1500   up       e1000g0
    
    global# dladm show-vnic
    LINK         OVER         SPEED  MACADDRESS           MACADDRTYPE         VID
    vnic1        e1000g0      1000   2:8:20:32:9:10       random              0
    
    
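As mentioned in the VNIC creation step above, a MAC address can also be chosen explicitly instead of relying on the auto mode, using the -m option of dladm create-vnic (the address below is hypothetical, shown only for illustration):

    global# dladm create-vnic -l e1000g0 -m 2:8:20:aa:bb:cc vnic2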

Create a Non-Global Zone with the VNIC

  • Create an Exclusive-IP Non-Global Zone with the newly created VNIC being the primary network interface.
    
    global # mkdir -p /export/zones/sapacc
    global # chmod 700 /export/zones/sapacc
    
    global # zonecfg -z sapacc
    sapacc: No such zone configured
    Use 'create' to begin configuring a new zone.
    zonecfg:sapacc> create
    zonecfg:sapacc> set zonepath=/export/zones/sapacc
    zonecfg:sapacc> set autoboot=false
    zonecfg:sapacc> set ip-type=exclusive
    zonecfg:sapacc> add net
    zonecfg:sapacc:net> set physical=vnic1
    zonecfg:sapacc:net> end
    zonecfg:sapacc> verify
    zonecfg:sapacc> commit
    zonecfg:sapacc> exit
    
    global # zoneadm -z sapacc install
    
    global # zoneadm -z sapacc boot
    
    global #  zoneadm list -cv
      ID NAME             STATUS     PATH                           BRAND    IP    
       0 global           running    /                              native   shared
   1 sapacc           running    /export/zones/sapacc           native   excl
    
    
  • Configure the new non-global zone including the IP address and the network services
    
    global # zlogin -C -e [ sapacc
    ...
    
      > Confirm the following information.  If it is correct, press F2;             
        to change any information, press F4.                                        
                                                                                    
                                                                                    
                      Host name: sap-zone2
                     IP address: 10.6.227.134                                       
        System part of a subnet: Yes                                                
                        Netmask: 255.255.255.0                                      
                    Enable IPv6: No                                                 
                  Default Route: Detect one upon reboot                             
    
    
  • Inside the non-global zone, check the status of the VNIC and the status of the network service
    
    local# hostname
    sap-zone2
    
    local# zonename
    sapacc
    
    local# ifconfig -a
    lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
            inet 127.0.0.1 netmask ff000000
    vnic1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
            inet 10.6.227.134 netmask ffffff00 broadcast 10.6.227.255
            ether 2:8:20:32:9:10
    lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
            inet6 ::1/128
    
    local# svcs svc:/network/physical
    STATE          STIME    FMRI
    disabled       13:02:18 svc:/network/physical:nwam
    online         13:02:24 svc:/network/physical:default
    
    
  • Check the network connectivity.

    From inside the non-global zone to the outside world:

    
    local# ping -s sap29
    PING sap29: 56 data bytes
    64 bytes from sap29 (10.6.227.177): icmp_seq=0. time=0.680 ms
    64 bytes from sap29 (10.6.227.177): icmp_seq=1. time=0.452 ms
    64 bytes from sap29 (10.6.227.177): icmp_seq=2. time=0.561 ms
    64 bytes from sap29 (10.6.227.177): icmp_seq=3. time=0.616 ms
    ^C
    ----sap29 PING Statistics----
    4 packets transmitted, 4 packets received, 0% packet loss
    round-trip (ms)  min/avg/max/stddev = 0.452/0.577/0.680/0.097
    
    
    From the outside world to the non-global zone:
    
    remotehostonWAN# telnet sap-zone2
    Trying 10.6.227.134...
    Connected to sap-zone2.sun.com.
    Escape character is '^]'.
    login: test
    Password: 
    Sun Microsystems Inc.   SunOS 5.11      snv_111 November 2008
    
    -bash-3.2$ /usr/sbin/ifconfig -a
    lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
            inet 127.0.0.1 netmask ff000000
    vnic1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
            inet 10.6.227.134 netmask ffffff00 broadcast 10.6.227.255
    lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
            inet6 ::1/128
    -bash-3.2$ exit
    logout
    Connection to sap-zone2 closed.
    
    

Dynamic [Re]Configuration of the [Virtual] Network Interface in a Non-Global Zone

  • Finally, try unplumbing and re-plumbing the virtual network interface inside the Exclusive-IP Non-Global Zone
    
    global # zlogin -C -e [ sapacc
    [Connected to zone 'sapacc' console]
    ..
    
    zoneconsole# ifconfig vnic1 unplumb
    
    zoneconsole# /usr/sbin/ifconfig -a
    lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
            inet 127.0.0.1 netmask ff000000
    
    zoneconsole# ifconfig vnic1 plumb
    
    zoneconsole# ifconfig vnic1 10.6.227.134 netmask 255.255.255.0 up
    
    zoneconsole# /usr/sbin/ifconfig -a
    lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
            inet 127.0.0.1 netmask ff000000
    vnic1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
            inet 10.6.227.134 netmask ffffff00 broadcast 10.6.227.255
    lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
            inet6 ::1/128
    
    

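When the VNIC is no longer needed, it can be removed from the global zone (an illustrative cleanup sketch, assuming the zone has been halted and the net resource removed from its configuration first):

    global# zoneadm -z sapacc halt
    global# dladm delete-vnic vnic1
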
As simple as that! Before we conclude, note that prior to Crossbow, Solaris system administrators had to use Virtual Local Area Networks (VLANs) to achieve similar outcomes.

Check the Zones and Containers FAQ if you are stuck in a strange situation or if you need some interesting ideas around virtualization on Solaris.
