Live migration in Oracle VM Server for SPARC 2.1

As I mentioned yesterday, Oracle VM Server for SPARC now supports "live migration". This is described in Oracle's Virtualization blog, and in more detail in Liam Merwick's blog. Liam is one of live migration's developers, so his comments are authoritative.

Live migration has been an eagerly anticipated addition to Oracle VM Server for SPARC, so I immediately upgraded a pair of lab machines to OVMSS 2.1 and started experimenting. The machines I used are a T5120 and T5220, with T2 chips running at 1.2GHz (actually 1165MHz, but why quibble). Not the fastest or most recent T-series servers, but more than adequate for the test.

Cold, Warm and Live Migration defined

First, a review of domain migration. A domain (also called a virtual machine) is moved from one host server (the "source" system) to another (the "target"). Both source and target systems require access to common network infrastructure and to the disk resources used by the guest virtual machine. Virtual disks are typically hosted on a SAN or via NFS.

Oracle VM Server for SPARC offers the following types of domain migration:

  • cold migration - the domain is not currently running on the source system. Cold migration is almost instantaneous, since only metadata is moved. After migration the domain is defined on the target, but remains inactive.
  • warm migration - the domain is running on the source system. The LDoms manager on the source system contacts the target system to start the migration and to ensure that the target system matches the chip type and has resources to host the domain. It then suspends the guest's operation, compresses and encrypts its state information (mostly RAM) using the T-series crypto acceleration, and transmits it to the target. The LDoms manager on the target decrypts and decompresses the contents and resumes domain operation. The domain can be unresponsive for minutes (depending on memory size and network speed), but picks up from where it was.
  • live migration - similar to warm migration, except that the guest is not suspended during transmission. Instead, the LDoms manager keeps track of memory changed while state is being transmitted, and then makes follow-up passes to retransmit the changed state information. A brief pause at the end transmits the residual changed state. This method is typical of virtual machine systems that provide live guest migration.

In all cases, the LDoms managers on the source and target machines cooperate to migrate the domain. The domain is defined on the source system before migration, and is defined on the target system afterwards with the same state (actively running or not) and the same identity and resources as before. The same syntax is used in each case, e.g.: ldm migrate mydomain <othersystem>. The LDoms manager performs cold migration if the domain is inactive, and uses live migration (on 2.1 systems) or warm migration otherwise. Note that cold and warm migration have been available since 2008.
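For concreteness, a minimal migration session might look like the following. The domain name "rover" and the target hostname here are illustrative; the single ldm migrate form is the one described above, and it prompts for the target machine's credentials.

```shell
# Run on the source machine's control domain.
# "rover" and "t5220-lab" are illustrative names.
# The same command performs cold or live/warm migration depending on
# whether the domain is inactive or running.
ldm list                      # confirm the domain's state on the source
ldm migrate rover t5220-lab   # prompts for the target's password
ldm list                      # afterwards, rover is no longer defined here
```

After the command completes, `ldm list` on the target shows the domain with the same state and resources it had on the source.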

There and back again

I defined a guest domain called "rover" (perhaps I should have named it "bilbo") with 8 CPUs and 512MB of RAM. I migrated it back and forth between the two machines, which took about 15 seconds each time, plus or minus a second. Later, I increased the domain's memory to 2GB and migrated it while it was doing a 'zpool scrub' of a mirrored ZFS root pool. That took about 25 seconds. The larger the memory image, and the more memory changes during migration, the longer migration takes.
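As a sanity check on those numbers, a back-of-envelope floor on migration time is simply guest RAM divided by effective network throughput; dirty-page retransmit passes add to that. The figures below are my own rough assumptions, not measurements of the product's internals:

```shell
# Rough lower bound on state-transfer time (assumption, not the
# product's algorithm): RAM size / effective network payload rate.
ram_mb=2048          # the 2GB guest
net_mb_per_s=100     # ~1 Gb/s link at a rough effective payload rate
echo "$((ram_mb / net_mb_per_s)) seconds"   # prints "20 seconds"
```

That 20-second floor is consistent with the ~25 seconds observed while the scrub was dirtying memory.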

The really good news is that the guest was responsive almost the entire time in all tests. I was logged onto rover via ssh and it felt "normal" at the keyboard, so the unresponsive period was too short to detect that way. I sent a stream of pings from other hosts to the migrating domain, and really couldn't tell anything there either: ping times rose slightly but no pings were dropped.
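If you want to quantify that blip rather than eyeball ping output, a small filter over the standard "time=XX ms" lines does the job. This is my own helper, not part of any Oracle tooling:

```shell
# Summarize round-trip times from ping output on stdin, to spot the
# brief suspension at the end of a live migration as a max-RTT spike.
# Assumes the usual "... time=XX ms" line format.
summarize_rtt() {
  awk -F'time=' '/time=/ {
    split($2, parts, " ")
    rtt = parts[1] + 0            # force numeric
    n++; sum += rtt
    if (rtt > max) max = rtt
  }
  END { if (n) printf "packets=%d avg=%.1fms max=%.1fms\n", n, sum/n, max }'
}
# Usage (Ctrl-C to stop and print the summary):
#   ping -s rover | summarize_rtt
```

In my tests the summary would show a slightly elevated max RTT but no lost packets.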

Finally, I launched an X-windows graphics demo (/usr/X/demo/muncher &) and watched it run. This X-windows program continuously draws pretty pixels in its window. Sure enough, it paused briefly at the very end of the migration.

In this screen shot you can see a live migration in progress. I've previously migrated rover from the system in the top left window (note the output from ldm list and ldm migrate) and now am migrating it back again from the system on the right. You can observe a graphical representation of CPU consumption: the source system is able to almost saturate 16 CPUs while compressing and transmitting domain contents in parallel; it takes far less CPU power on the receiving side. I have a terminal window open on rover with a small shell script that repeatedly sleeps two seconds and displays the current time. In a separate window is the X-windows demo.
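The screen shot doesn't show the heartbeat script itself, but a reconstruction is trivial; any gap noticeably longer than the sleep interval between timestamps marks the suspension window. (Bounded to three iterations here; the one in the screen shot looped forever.)

```shell
# Heartbeat loop like the one running in rover's terminal: print a
# timestamp, sleep, repeat. A gap much longer than the sleep interval
# between consecutive timestamps reveals the migration pause.
heartbeat() {
  i=0
  while [ "$i" -lt 3 ]; do      # bounded here; original ran forever
    date '+%H:%M:%S'
    i=$((i + 1))
    sleep 1                     # the screen-shot version slept 2 seconds
  done
}
```

Run `heartbeat` in the guest before starting the migration and watch the timestamps.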

Use cases and alternatives

Domain or virtual machine migration (regardless of vendor) is not the answer to all IT issues. For example, it doesn't provide disaster recovery or non-disruptive high availability (if the server or site hosting a domain is down, you can't initiate migration). For that, you need a true high availability solution, such as Oracle Solaris Cluster or Oracle Real Application Clusters (RAC).

Oracle VM Server for SPARC lets you provision redundant I/O and multiple service domains. That makes it possible to perform non-disruptive "rolling upgrades" within a single box: you can take a service domain down to update its OS version while guest I/O continues from other service domains. After the first service domain is upgraded, you can update the other domains in like manner without guest domain outages or interruption of service. This is not possible with systems that use a monolithic hypervisor and must take an outage to update it. On those platforms, virtual machine migration is the only way to upgrade without disruption. That said, live migration certainly adds a useful option for upgrading and servicing logical domains systems, in conjunction with, or instead of, configuring same-box redundancy.


Summary

Oracle VM Server for SPARC 2.1 can now migrate running guest domains between servers with little delay and extremely short suspension periods.

Domain migration can now be used for a wider set of purposes. Possible use cases include migrating a domain to free up resources on the source machine for other domains, consolidating domains onto a smaller set of servers during low-use periods in order to reduce power consumption, or migrating domains off a server that is being taken out of service for maintenance or to be decommissioned.

Next blog entry: using DTrace to observe domain migration!

EDIT: I'm not the "principal engineer at Oracle responsible for the Sparc hypervisor" as The Register just said here, but thanks for the notice anyway! I agree with them that LDoms is one of the most clever things to come down the pike in a long time. It's a very significant innovation in server virtualization.


Can you describe how your storage was set up? Is your data on NAS or SAN? I assume some sort of shared storage is required to move the data between the machines?

Posted by guest on June 17, 2011 at 10:09 AM MST #

You're quite right - you have to have some sort of shared storage available to the source and target machines, and the guest uses the exact same disk resources for its virtual disks before and after migration.

In my case, I used NAS, with a simple NFS mount to a server available to both machines. The virtual disk is set up the same as any other virtual disk with a file based back-end. I did it this way because it was easy for me: I had the NFS server and it was quick to set up. In production where performance matters, I would probably use a SAN or an iSCSI shared disk.
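In ldm terms, that setup looks roughly like the following; the NFS mount point and volume names are illustrative, and the add-vdsdev/add-vdisk pair is the standard way to define a file-backed virtual disk:

```shell
# On BOTH source and target control domains: export the shared
# file-backed disk through the virtual disk service. /nfs/ldoms is an
# NFS mount visible to both machines (illustrative path and names).
ldm add-vdsdev /nfs/ldoms/rover-disk0.img rover-vol0@primary-vds0

# Once, when configuring the guest: give rover a virtual disk backed
# by that volume. The guest sees the same disk on either machine.
ldm add-vdisk vdisk0 rover-vol0@primary-vds0 rover
```

Because both machines export the identical back-end, the guest's disk identity is unchanged across the migration.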

regards, Jeff

Posted by guest on June 17, 2011 at 11:49 AM MST #



