Virtual Network - Part 4
By Jeff Victor-Oracle on Mar 01, 2011
Resource Controls

This is the fourth part of a series of blog entries about Solaris network virtualization. Part 1 introduced network virtualization, Part 2 discussed network resource management capabilities available in Solaris 11 Express, and Part 3 demonstrated the use of virtual NICs and virtual switches.
This entry shows the use of a bandwidth cap on Virtual Network Elements (VNEs). This form of network resource control can effectively limit the amount of bandwidth consumed by a particular stream of packets. In our context, we will restrict the amount of bandwidth that a zone can use.
As a reminder, we have the following network topology, with three zones and
three VNICs, one VNIC per zone.
All three VNICs were created on one Ethernet interface in Part 3 of this series.
Capping VNIC Bandwidth

Using a T2000 server in a lab environment, we can measure network throughput with the new dlstat(1) command. This command reports various statistics about data links, including the quantity of packets, bytes, interrupts, polls, drops, blocks, and other data. Because I am trying to illustrate the use of commands, not optimize performance, the network workload will be a simple file transfer using ftp(1). This method of measuring network bandwidth is reasonable for this purpose, but says nothing about the performance of this platform. For example, this method reads data from a disk. Some of that data may be cached, but disk performance may impact the network bandwidth measured here. However, we can still achieve the basic goal: demonstrating the effectiveness of a bandwidth cap.
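For reference, the basic measurement pattern looks roughly like this; the prompts, remote host name, and file name below are hypothetical placeholders, not part of the original session:

# In the global zone, sample the VNIC of interest every 10 seconds:
GZ# dlstat -i 10 emp_app1

# Meanwhile, inside the emp-app zone, transfer a large file with ftp
# (remotehost and largefile.dat are hypothetical):
emp-app# ftp remotehost
ftp> bin
ftp> get largefile.dat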
With that background out of the way, first let's check the current status of our links.
GZ# dladm show-link
LINK        CLASS     MTU    STATE    BRIDGE     OVER
e1000g0     phys      1500   up       --         --
e1000g2     phys      1500   unknown  --         --
e1000g1     phys      1500   down     --         --
e1000g3     phys      1500   unknown  --         --
emp_web1    vnic      1500   up       --         e1000g0
emp_app1    vnic      1500   up       --         e1000g0
emp_db1     vnic      1500   up       --         e1000g0

GZ# dladm show-linkprop emp_app1
LINK         PROPERTY        PERM VALUE           DEFAULT         POSSIBLE
emp_app1     autopush        rw   --              --              --
emp_app1     zone            rw   emp-app         --              --
emp_app1     state           r-   unknown         up              up,down
emp_app1     mtu             rw   1500            1500            1500
emp_app1     maxbw           rw   --              --              --
emp_app1     cpus            rw   --              --              --
emp_app1     cpus-effective  r-   1-9             --              --
emp_app1     pool            rw   SUNWtmp_emp-app --              --
emp_app1     pool-effective  r-   SUNWtmp_emp-app --              --
emp_app1     priority        rw   high            high            low,medium,high
emp_app1     tagmode         rw   vlanonly        vlanonly        normal,vlanonly
emp_app1     protection      rw   --              --              mac-nospoof,
                                                                   restricted,
                                                                   ip-nospoof,
                                                                   dhcp-nospoof
<some lines deleted>

Before setting any bandwidth caps, let's determine the transfer rates between a zone on this system and a remote system.
It's easy to use dlstat to determine the data rate to my home system while transferring a file from a zone:
GZ# dlstat -i 10 e1000g0
           LINK    IPKTS   RBYTES    OPKTS   OBYTES
       emp_app1   27.99M    2.11G   54.18M   77.34G
       emp_app1       83    6.72K        0        0
       emp_app1      339   23.73K    1.36K    1.68M
       emp_app1    1.79K  120.09K    6.78K    8.38M
       emp_app1    2.27K  153.60K    8.49K   10.50M
       emp_app1    2.35K  156.27K    8.88K   10.98M
       emp_app1    2.65K  182.81K    5.09K    6.30M
       emp_app1      600   44.10K      935    1.15M
       emp_app1      112    8.43K        0        0

The OBYTES column is simply the number of bytes transferred during that data sample. I'll ignore the 1.68MB and 1.15MB data points because the file transfer began and ended during those samples. The average of the other values leads to a bandwidth of 7.6 Mbps (megabits per second), which is typical for my broadband connection.
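If you want to reproduce that arithmetic, here is a quick sketch with nawk (not part of the original session) that converts the four full 10-second samples from MiB per interval into megabits per second, assuming dlstat's "M" suffix means 2^20 bytes:

GZ# echo "8.38 10.50 10.98 6.30" | nawk '
      # Sum the full 10-second samples, converting MiB to bytes.
      { for (i = 1; i <= NF; i++) { bytes += $i * 1048576; n++ } }
      # Average bytes per second, times 8 bits, expressed in megabits (10^6).
      END { printf("%.1f Mbps\n", bytes * 8 / (n * 10 * 1000000)) }'
7.6 Mbps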
Let's pretend that we want to constrain the bandwidth consumed by that workload to 2 Mbps. Perhaps we want to leave all of the rest for a higher-priority workload. Perhaps we're an ISP and charge for different levels of available bandwidth. Regardless of the situation, capping bandwidth is easy:
GZ# dladm set-linkprop -p maxbw=2000k emp_app1
GZ# dladm show-linkprop -p maxbw emp_app1
LINK         PROPERTY        PERM VALUE          DEFAULT        POSSIBLE
emp_app1     maxbw           rw       2          --             --

GZ# dlstat -i 20 emp_app1
           LINK    IPKTS   RBYTES    OPKTS   OBYTES
       emp_app1   18.21M    1.43G   10.22M   14.56G
       emp_app1      186   13.98K        0        0
       emp_app1      613   51.98K    1.09K    1.34M
       emp_app1    1.51K  107.85K    3.94K    4.87M
       emp_app1    1.88K  131.19K    3.12K    3.86M
       emp_app1    2.07K  143.17K    3.65K    4.51M
       emp_app1    1.84K  136.03K    3.03K    3.75M
       emp_app1    2.10K  145.69K    3.70K    4.57M
       emp_app1    2.24K  154.95K    3.89K    4.81M
       emp_app1    2.43K  166.01K    4.33K    5.35M
       emp_app1    2.48K  168.63K    4.29K    5.30M
       emp_app1    2.36K  164.55K    4.32K    5.34M
       emp_app1      519   42.91K      643  793.01K
       emp_app1      200   18.59K        0        0

Note that for dladm, the default unit for maxbw is Mbps, which is why maxbw=2000k is displayed as a value of 2. The average of the full samples is 1.97 Mbps.
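Because maxbw accepts a unit suffix (both "k" and "m" forms appear in this entry), the same 2 Mbps cap could presumably also be written with an "m" suffix. A minimal sketch, not from the original session:

GZ# dladm set-linkprop -p maxbw=2m emp_app1
GZ# dladm show-linkprop -p maxbw emp_app1
LINK         PROPERTY        PERM VALUE          DEFAULT        POSSIBLE
emp_app1     maxbw           rw       2          --             --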
Between zones, the uncapped data rate is higher:
GZ# dladm reset-linkprop -p maxbw emp_app1
GZ# dladm show-linkprop -p maxbw emp_app1
LINK         PROPERTY        PERM VALUE          DEFAULT        POSSIBLE
emp_app1     maxbw           rw   --             --             --

GZ# dlstat -i 20 emp_app1
           LINK    IPKTS   RBYTES    OPKTS   OBYTES
       emp_app1   20.80M    1.62G   23.36M   33.25G
       emp_app1      208   16.59K        0        0
       emp_app1   24.48K    1.63M  193.94K  277.50M
       emp_app1  265.68K   17.54M    2.05M    2.93G
       emp_app1  266.87K   17.62M    2.06M    2.94G
       emp_app1  255.78K   16.88M    1.98M    2.83G
       emp_app1  206.20K   13.62M    1.34M    1.92G
       emp_app1   18.87K    1.25M   79.81K  114.23M
       emp_app1      246   17.08K        0        0

This five-year-old T2000 can move at least 1.2 Gbps of data internally, but that took five simultaneous ftp sessions. (A better measurement method, one that doesn't include the limits of disk drives, would yield better results, and newer systems, either x86 or SPARC, have higher internal bandwidth characteristics.) In any case, the maximum data rate is not interesting for our purpose, which is to demonstrate the ability to cap that rate.
You can often resolve a network bottleneck between two workloads, while maintaining their isolation, by moving them onto the same system in separate zones. Even so, you might choose to limit their bandwidth consumption. Fortunately, the network virtualization tools in Solaris 11 Express enable you to accomplish that:
GZ# dladm set-linkprop -t -p maxbw=25m emp_app1
GZ# dladm show-linkprop -p maxbw emp_app1
LINK         PROPERTY        PERM VALUE          DEFAULT        POSSIBLE
emp_app1     maxbw           rw      25          --             --

Note that the change to the bandwidth cap was made while the zone was running, potentially while network traffic was flowing. Also, changes made by dladm are persistent across reboots of Solaris unless you specify "-t" on the command line, as I did here.
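To make the same cap persistent, omit the -t flag; to remove it again, use reset-linkprop as shown earlier. A minimal sketch, not from the original session:

GZ# dladm set-linkprop -p maxbw=25m emp_app1
GZ# dladm reset-linkprop -p maxbw emp_app1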
Data moves much more slowly now:
GZ# dlstat -i 20 emp_app1
           LINK    IPKTS   RBYTES    OPKTS   OBYTES
       emp_app1   23.84M    1.82G   46.44M   66.28G
       emp_app1      192   16.10K        0        0
       emp_app1    1.15K   79.21K    5.77K    8.24M
       emp_app1   18.16K    1.20M   40.24K   57.60M
       emp_app1   17.99K    1.20M   39.46K   56.48M
       emp_app1   17.85K    1.19M   39.11K   55.97M
       emp_app1   17.39K    1.15M   38.16K   54.62M
       emp_app1   18.02K    1.19M   39.53K   56.58M
       emp_app1   18.66K    1.24M   39.60K   56.68M
       emp_app1   18.56K    1.23M   39.24K   56.17M
<many lines deleted>

The data show an aggregate bandwidth of 24 Mbps.
Conclusion

The network virtualization tools in Solaris 11 Express include various resource controls. The simplest of these is the bandwidth cap, which you can use to effectively limit the amount of bandwidth that a workload can consume. Both physical NICs and virtual NICs may be capped with this simple method. This also applies to workloads that run in Solaris Zones - both default zones and Solaris 10 Zones, which mimic Solaris 10 systems.
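For example, the same property can be set on a physical link; the 100m value here is just an illustration, not from the original session:

GZ# dladm set-linkprop -p maxbw=100m e1000g0
GZ# dladm show-linkprop -p maxbw e1000g0
LINK         PROPERTY        PERM VALUE          DEFAULT        POSSIBLE
e1000g0      maxbw           rw     100          --             --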
Next time we'll explore some other virtual network architectures.