Tuesday Aug 20, 2013

Virtual network performance greatly improved!

With the latest OVM Server for SPARC, virtual network performance has improved greatly. We are now able to drive line rate (9.x Gbps) on a 10 Gbps NIC, and up to 16 Gbps for Guest-to-Guest communication. These numbers are achieved with the standard MTU (1500); there is no need to use Jumbo Frames to get this performance. This is made possible by introducing support for LSO (Large Send Offload) in our networking stack. The following graphs are from a SPARC T5-2 platform, with 2 cores assigned to the Control domain and to each Guest domain.

LDoms Virtual network performance graphs

Note: In general, for any network device the performance numbers depend on the type of workload. The numbers above were obtained with an iperf workload and a message size of 8 KB, with the interface configured with the standard MTU.
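For reference, a measurement like the one above can be reproduced with a standard iperf client/server pair. This is a sketch only; the hostname is hypothetical, and the exact stream count needed to saturate a 10 Gbps link depends on the configuration:

```shell
# On the receiving domain: start an iperf server.
iperf -s

# On the sending domain: 8 KB message size as in the measurements above;
# -P 4 runs four parallel streams to help saturate a 10 Gbps link.
# "guest1" is a hypothetical hostname for the receiving domain.
iperf -c guest1 -l 8k -P 4 -t 60
```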

These improvements are available in S11.1 SRU9; the latest SRU is always recommended. The S10 patches with the same improvements will be available very soon. We highly recommend using S11.1 in the Service domains.

What do you need to get this performance?

  • Install S11.1 SRU9 or later in both the Service domain (the domain that hosts the LDoms vsw) and the Guest domains. Both the Service domain and the Guest domains must be updated to get this performance.
    • S10 patches with equivalent performance are also available; patch 150031-07 must be installed in the S10 domain(s). Please contact Oracle support for any additional information.
  • Update to the latest system firmware available for your platform.
    • These performance numbers can be expected only on SPARC T4 and newer platforms.
  • Ensure that extended-mapin-space is set to on for both the Service domain and the Guest domains.
    • Note that OVM Server for SPARC 3.1 software and the associated firmware set extended-mapin-space to on by default, so this performance comes out of the box. In any case, confirm that it is set to on in all domains.
    • You can check this with the following command:

# ldm ls -l <domain-name> |grep extended-mapin
    • If extended-mapin-space is not set to on, you can set it with the following command. Note that changing extended-mapin-space triggers a delayed reconfiguration on the primary domain (requiring a reboot), and Guest domains must be stopped before the change.
# ldm set-domain extended-mapin-space=on <domain-name>

  • Ensure that sufficient CPU and memory resources are assigned to both the Service domain and the Guest domains. To drive 10 Gbps or beyond, a Guest domain must be configured to be able to sustain that rate; we recommend assigning 2 CPU cores or more and 4 GB or more of memory to each Guest domain. Because the Service domain proxies data for the Guest domains, it is equally important to give it sufficient resources; we recommend 2 CPU cores or more and 4 GB or more of memory for the Service domain as well.
  • No jumbo frames configuration is required; this performance improvement is available with the standard MTU (1500) as well. We introduced LSO support specifically to optimize performance for the standard MTU. In fact, we recommend avoiding Jumbo Frames unless you have a specific need.
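The checklist above can be run through from the Control domain roughly as follows. This is a sketch, not a definitive procedure: the domain name ldg1 is hypothetical, and the commands assume an active LDoms environment:

```shell
# 1. Confirm the OS level in each domain (S11.1 SRU9 or later):
pkg info entire | grep Version

# 2. Confirm extended-mapin-space is on; if not, stop the Guest domain
#    and set it (on the primary domain this triggers a delayed
#    reconfiguration and requires a reboot):
ldm ls -l ldg1 | grep extended-mapin
ldm stop-domain ldg1
ldm set-domain extended-mapin-space=on ldg1

# 3. Assign at least 2 cores and 4 GB of memory, then restart:
ldm set-core 2 ldg1
ldm set-memory 4g ldg1
ldm start-domain ldg1
```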

Saturday Dec 10, 2011

LDoms networking in Solaris 11

Since Oracle Solaris 11 is officially released now, I thought I would explain how LDoms networking is integrated with the networking enhancements in S11, mainly project Crossbow. The network stack for Oracle Solaris 11 has been substantially re-architected in an effort known as project Crossbow. One of the main goals of Crossbow is to virtualize physical NICs into virtual NICs (VNICs) to provide more effective sharing of networking resources. The VNIC feature allows dividing a physical NIC into multiple virtual interfaces that provide independent network stacks for applications.

LDoms networking in Oracle Solaris 11 has been re-designed along with Crossbow to take advantage of the underlying infrastructure enhancements it provides. The following is a high-level view of how the LDoms virtual switch in an S11 Service domain and the LDoms virtual network device in an S11 Guest domain fit together. The diagram also shows an S10 Guest domain, which is fully compatible with an S11 Service domain.

High-level view of LDoms networking in Solaris 11

The LDoms virtual switch in a Solaris 11 Service domain is now re-designed to be layered on top of the Crossbow MAC layer, at the same level as a VNIC. The actual virtual switching is now done at the Crossbow MAC layer; as a result, the LDoms virtual switch is fully compatible with VNICs on the same physical NIC. There is no longer any requirement to plumb the LDoms vsw in the Service domain for it to communicate with Guest domains. The LDoms virtual network device driver has also been re-designed to exploit features such as rings and polling.
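As a sketch of this coexistence, a virtual switch and a VNIC can share the same physical link. The link and device names below (net0, primary-vsw0, vnic0) are hypothetical:

```shell
# In the Control/Service domain: create an LDoms virtual switch backed
# by the physical link net0:
ldm add-vsw net-dev=net0 primary-vsw0 primary

# Because switching happens in the Crossbow MAC layer, a VNIC created
# on the same physical link coexists with the vsw:
dladm create-vnic -l net0 vnic0
```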


  • All existing LDoms networking features are fully supported with Solaris 11 in both the Service domain and Guest domains; this includes:
    • VLANs feature
    • Link based failure detection support
    • NIU Hybrid I/O
    • Jumbo Frames
    • Link Aggregation device as an LDoms virtual switch backend device.
  • Guest domains running either Solaris 10 or Solaris 11 are fully compatible with a Solaris 11 Service domain.
  • A Guest domain running Solaris11 is fully compatible with a service domain running Solaris10 or Solaris11.
  • Existing LDoms configurations continue to work even if the Service domain or a Guest domain is re-installed with Solaris 11. That is, there is no need to re-create the LDoms configuration.
  • Crossbow VNIC features such as bandwidth limits and link priorities are not available for LDoms virtual network (vnet) devices.
  • Creation of VNICs on top of LDoms vsw or vnet devices is not supported.
  • Solaris 11 introduces a new IPMP failure detection mechanism known as transitive probing, which avoids the need for test IP addresses. That is, virtualization customers can now use transitive probing to detect network failures without worrying about allocating a large number of test IP addresses.
  • Solaris 11 has a feature known as vanity naming that generates simple names such as net0 for all network devices. When creating an LDoms virtual switch, you can use either the vanity name or the actual physical NIC interface name for the net-dev option. The vanity name is preferred, but make sure you are using the right underlying network device.
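The last two points can be sketched as follows. This assumes a Solaris 11 host; the vsw and link names are hypothetical:

```shell
# Enable transitive probing for IPMP (no test addresses needed):
svccfg -s svc:/network/ipmp setprop config/transitive-probing=true
svcadm refresh svc:/network/ipmp:default

# Map vanity names (net0, net1, ...) to the underlying physical
# devices before passing one to net-dev:
dladm show-phys
ldm add-vsw net-dev=net0 primary-vsw0 primary
```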

Known issues: 

  • CR 7087781: the first boot after adding a virtual switch to a Service domain may hang. See the Solaris 11 release notes at the following URL for more details and the workaround.
    • http://docs.oracle.com/cd/E23824_01/html/E23811/glmdq.html#gloes
  • Creating VNICs on LDoms vnet and vsw devices may appear to succeed (the command does not fail), but such VNICs will not communicate. Note, VNICs on top of LDoms vsw and vnet devices are not supported.
    • Zone creation in Solaris 11 may auto-create a VNIC on an LDoms vnet device, which will not function. As a workaround, create a vnet device for each zone in the Guest domain and explicitly assign that vnet device to the zone. If the deployment requires a large number of vnets, you may choose to disable the inter-vnet-link feature in LDoms to save LDC resources, thereby making it possible to create many more vnets or other virtual devices. NOTE: the ability to disable inter-vnet links was introduced in LDoms 2.1.
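A sketch of the workaround, assuming an exclusive-IP zone; the domain, vnet, vsw, and zone names are hypothetical:

```shell
# In the Control domain: one vnet per zone. inter-vnet-link=off saves
# LDC channels when many vnets are needed (LDoms 2.1 or later):
ldm add-vnet inter-vnet-link=off vnet1 primary-vsw0 ldg1

# In the Guest domain: assign the vnet device to the zone explicitly
# instead of letting zone creation auto-create a VNIC:
zonecfg -z zone1 'add net; set physical=vnet1; end'
```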


Raghuram Kothakota

