Live migration of a Guest with an SR-IOV VF using the Dynamic SR-IOV feature
By user12611315 on Aug 20, 2013
NOTE: This is only an example of how Dynamic SR-IOV can be used to accomplish live migration of a Guest with an SR-IOV VF.
OVM Server for SPARC 3.1 introduces the Dynamic SR-IOV feature, which provides the capability to dynamically add and remove SR-IOV Virtual Functions (VFs) in Logical domains. This post describes one use case in which Dynamic SR-IOV is combined with rich Solaris I/O features to accomplish the live migration of a Logical domain that has an Ethernet SR-IOV Virtual Function assigned to it. The idea is to create a multipath configuration in the Logical domain with a VF and a Virtual Network (vnet) device, so that we can dynamically remove the VF before the live migration and then re-assign the same VF on the target system. The vnet device in the Logical domain serves two purposes: 1) it allows the VF to be dynamically removed from the domain, and 2) it carries the application's traffic for the duration of the live migration, since the VF is removed at the start of the migration.
Solaris IPMP is currently the best multipath configuration available for a VF and a Vnet. We need to configure the Vnet device as a standby device so that the high-performance VF is used for communication whenever it is available. If you want the VF to be assigned again on the target system, the current restriction is that you must add a VF with exactly the same name, that is, a VF from the same PCIe slot and with the same VF number as on the source system. When the Solaris OS in the Logical domain sees the same device, it automatically adds it back to the same IPMP group, so no manual intervention is required. Because the VF was configured as the active device, IPMP automatically redirects traffic back to the VF. The following diagram shows this configuration visually:
The above configuration shows that the same PF can also be used as the backend device for the Virtual Switch, with a VF from that same PF assigned to the Guest domain. That is, both the Vnet and the VF use the same PF device. Note that you are free to use another NIC as the backend device for the virtual switch; this example simply demonstrates that multiple network ports are not required for this configuration. Another recommended configuration is to host the Virtual Switch in a separate Service domain. That way, an admin can use the same setup to handle manual reboots of the Root domain: manually remove the VF from the Guest domain dynamically before rebooting the Root domain, and assign the VF again dynamically after the Root domain has booted.
The following are the high-level steps to create this configuration.
- A Guest domain named "ldg1" is already created with a configuration that meets the live migration requirements.
- The Guest domain OS supports Dynamic SR-IOV; see the OVM Server for SPARC 3.1 Release notes for the OS versions that support Dynamic SR-IOV.
- The Physical Function /SYS/MB/NET0/IOVNET.PF0 is used for the VFs.
- The network device "net0" on the primary domain maps to the PF /SYS/MB/NET0/IOVNET.PF0.
- The desired number of SR-IOV Virtual Functions are already created on the PF /SYS/MB/NET0/IOVNET.PF0; see the OVM Server for SPARC 3.1 admin guide for how to create Virtual Functions. This example uses the VF named /SYS/MB/NET0/IOVNET.PF0.VF0.
- A Virtual Switch(vsw) named "primary-vsw0" is already created on the primary domain.
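The control-domain preparation described above can be sketched as a small script. This is a minimal sketch, not the documented procedure: the PF path, domain names, and the single-VF creation are taken from this example, and `DRY_RUN=1` (the default here) only prints each `ldm` command instead of executing it, so you can review the sequence before running it on a real system.

```shell
#!/bin/sh
# Sketch of the control-domain setup for this example (assumptions noted above).
DRY_RUN=${DRY_RUN:-1}
PF=/SYS/MB/NET0/IOVNET.PF0
VF=$PF.VF0
DOMAIN=ldg1

run() {
    # In dry-run mode, print the command instead of executing it.
    if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi
}

# Create one VF on the PF (see the 3.1 admin guide for the full procedure
# and any root-domain state requirements).
run ldm create-vf "$PF"

# Create the virtual switch backed by the same PF's network device
# (net0 on the primary domain), then give the guest a vnet on that switch.
run ldm add-vsw net-dev=net0 primary-vsw0 primary
run ldm add-vnet vnet0 primary-vsw0 "$DOMAIN"

# Assign the VF to the guest domain.
run ldm add-io "$VF" "$DOMAIN"
```

Running the script as-is prints the five commands in order; set `DRY_RUN=0` on a system where `ldm` is available to execute them.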
- Create and add a Vnet device to the Guest domain ldg1.
- # ldm add-vnet vnet0 primary-vsw0 ldg1
- Add a VF to the Guest domain ldg1
- # ldm add-io /SYS/MB/NET0/IOVNET.PF0.VF0 ldg1
- Boot the Guest domain ldg1
- Log in to the Guest domain and configure IPMP.
- Use the "dladm show-phys" command to determine the netX names that the Solaris OS assigned to the Vnet and VF devices.
- This example assumes net0 maps to the Vnet device and net1 maps to the VF device.
- Create the IPMP configuration. Note that this creates a simple active/standby IPMP configuration; you can adapt the configuration to your network needs. The important point is that the Vnet device is configured as the standby device.
- # ipadm create-ip net0
- # ipadm create-ip net1
- # ipadm set-ifprop -p standby=on -m ip net0
- # ipadm create-ipmp -i net0 -i net1 ipmp0
- # ipadm create-addr -T static -a local=<ipaddr>/<netmask> ipmp0
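The link-name discovery step above can also be done programmatically. This is a hedged sketch: it parses output of the form produced by `dladm show-phys -p -o link,device`, but here a hard-coded SAMPLE string stands in for that output, and the VF device name (`ixgbevf0`) is illustrative, not taken from the original post. The `ipadm` commands are printed rather than executed.

```shell
#!/bin/sh
# Sketch: map netX link names to the vnet and VF devices, then print the
# IPMP commands from the steps above. SAMPLE mimics
#   dladm show-phys -p -o link,device
# output on the guest (device names are assumptions for illustration).
SAMPLE='net0:vnet0
net1:ixgbevf0'

# The vnet link is the one whose device name starts with "vnet".
VNET_LINK=$(echo "$SAMPLE" | awk -F: '/:vnet/ {print $1}')
VF_LINK=$(echo "$SAMPLE" | awk -F: '!/:vnet/ {print $1}')

echo "vnet link (standby): $VNET_LINK"
echo "VF link (active):    $VF_LINK"

# IPMP setup as in the steps above, printed for review:
cat <<EOF
ipadm create-ip $VNET_LINK
ipadm create-ip $VF_LINK
ipadm set-ifprop -p standby=on -m ip $VNET_LINK
ipadm create-ipmp -i $VNET_LINK -i $VF_LINK ipmp0
EOF
```

On a real guest, replace SAMPLE with the actual `dladm show-phys -p -o link,device` output and run the printed commands, followed by the static-address assignment shown above.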
Live migration steps:
- On the source system to live migrate:
- # ldm remove-io /SYS/MB/NET0/IOVNET.PF0.VF0 ldg1
- # ldm migrate -p <password-file> ldg1 root@<target-system>
- On the target system after live migration:
- # ldm add-io /SYS/MB/NET0/IOVNET.PF0.VF0 ldg1
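The migration sequence above can be sketched as a single script for review. This is a sketch only: the target host name and password-file path are hypothetical placeholders, the `remove-io` and `migrate` steps run on the source machine while the `add-io` step must be run on the target after migration completes, and `DRY_RUN=1` (the default here) prints each command instead of executing it.

```shell
#!/bin/sh
# Sketch of the live-migration sequence (placeholders noted above).
DRY_RUN=${DRY_RUN:-1}
VF=/SYS/MB/NET0/IOVNET.PF0.VF0
DOMAIN=ldg1
TARGET=target-system          # hypothetical target host name
PWFILE=/var/tmp/mig-pwfile    # hypothetical password-file path

run() {
    # In dry-run mode, print the command instead of executing it.
    if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi
}

# 1. On the source: remove the VF so the domain holds only virtual devices;
#    IPMP fails traffic over to the standby vnet.
run ldm remove-io "$VF" "$DOMAIN"

# 2. On the source: live migrate; traffic flows over the vnet meanwhile.
run ldm migrate -p "$PWFILE" "$DOMAIN" "root@$TARGET"

# 3. On the target, after migration: re-add the identically named VF;
#    IPMP moves traffic back to it automatically.
run ldm add-io "$VF" "$DOMAIN"
```

Note that step 3 only works because the target system exposes a VF with exactly the same name (same PCIe slot, same VF number), as discussed earlier.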
The following YouTube video is a demo of live migration with an SR-IOV VF in a guest while a network performance test is running. The graphs show the traffic switching to the Vnet after the VF is removed, and then switching back to the VF when the VF is added on the target system.