Tuesday Aug 20, 2013

Cross-CPU migration in LDoms 3.1

Oracle VM Server for SPARC (aka LDoms) 3.1 is now available. You can read all about it over at the Virtualization Blog. I want to focus on the enhancements we've made to cross-CPU migration.

Cross-CPU migration is the ability to migrate a running virtual machine (VM) from a SPARC server with one type of CPU chip to another of a different type. Prior to LDoms 3.1, the only way to perform such a migration was to specify a "cpu-arch" property of "generic" when first defining the VM. Doing so allowed VMs to be migrated between systems based on the T2, T2+, T3 and T4 processors. However, specifying this property results in the VM running without taking advantage of all the features of the underlying CPU, negatively impacting performance. The most serious impact was loss of hardware-based crypto acceleration on the T4 processors. This was unfortunately necessary, due to major changes in HW crypto acceleration introduced in T4.

With the release of our T5 and M5 processors and systems, we didn't want our customers to continue to suffer this penalty when performing cross-CPU migrations amongst these newer platforms. To accomplish that, we have defined a new migration "class" (i.e. another value for the "cpu-arch" property) named "migration-class1". With LDoms 3.1, you can now specify "cpu-arch=migration-class1" on your T4, T5, and M5 based systems, allowing those VMs to be migrated across systems based on those three processor families. Most important of all, your VMs will run with full HW crypto capabilities (not to mention support for the larger page sizes available in these processors)!
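
For example (the domain name myvm and host t5-host below are made up, and the property can only be set when the domain is first defined or while it is inactive; consult the 3.1 docs for the definitive procedure):

ldm set-domain cpu-arch=migration-class1 myvm
ldm migrate-domain myvm root@t5-host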

Ideally we would have provided this new migration class at the time T5 and M5 were first made available, but for various reasons that was not feasible. It is only available now, with LDoms 3.1. And as with any new LDoms release, there are minimum Solaris and firmware versions required to get all the features of the release; please consult the relevant documentation for details.

Note that the "migration-class1" family does _not_ include the T2, T2+ or T3 processors. To perform cross-CPU migration between those processor families (or between them and T4), you must continue to use the "generic" migration class. In addition, live migrations from T2/T2+/T3 systems to T5/M5 systems (and vice-versa) are not supported. Now, you probably noticed that T4 is in both the "generic" and "migration-class1" families, so you might be tempted to migrate from (say) T2 to T5 in 2 hops, T2->T4 then T4->T5. The problem with that approach is that you'll need to switch the migration class of the VM after it is migrated to the T4 platform. Doing so requires the VM to be restarted, negating the benefit of live migration. For this reason, we recommend cold migration in this scenario.

Friday Sep 28, 2012

Oracle Open World 2012

I'll be at Oracle Open World 2012 next week in San Francisco. I'm presenting in a session entitled "What’s New with Oracle VM Server for x86 and SPARC Architectures: A Technical Deep Dive", along with Adam Hawley. We'll be talking about Oracle's overall virtualization strategy, what's new with Oracle server virtualization on both x86 and SPARC, as well as an update on Oracle's virtualization management capabilities. The session runs from 11:45am to 12:45pm on Wednesday 10/3, in Moscone South - room 252.

You can also find me at the Oracle VM Server for SPARC booth on Monday morning and Tuesday afternoon to showcase some pretty cool upcoming features for SPARC virtualization.

And if you're there early, you might catch me at the Software Deployment with Oracle VM Templates booth on Sunday afternoon.

It promises to be a jam-packed and informative week!

Thursday May 24, 2012

Oracle VM Server for SPARC 2.2: Improved whole-core allocation

We've just released Oracle VM Server for SPARC 2.2! This release contains some major new features, along with numerous other improvements and (of course) bug fixes. I'll only briefly highlight the key new features here; visit Oracle's Virtualization Blog for more information on this release, including where to download the software, how to access the documentation, and pointers to blogs describing other features in the release. In this post, I'm going to focus on the improvements we've made in configuring your VMs using whole-core allocation.

First, the major new features:


  • Cross-CPU live migration - Now migrate between T-series systems of different CPU types & frequencies
  • Single-root I/O virtualization (SR-IOV) on Oracle's SPARC T3 and SPARC T4 platforms
  • Improved virtual networking performance through explicit configuration of extended mapin space (see the brief example below)

In addition, 2.2 includes two features first introduced via updates to 2.1: dynamic threading on the SPARC T4 processor, and support for the SPARC SuperCluster platform.
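
As an aside on the third item above: if I recall correctly, extended mapin space is toggled via a domain property, along these lines (ldom1 is a placeholder; check the 2.2 Reference Manual for the definitive syntax):

ldm set-domain extended-mapin-space=on ldom1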


Whole core allocation improvements

In Oracle VM Server for SPARC 2.0, we introduced the concept of whole-core allocation, allowing users to optionally allocate CPU resources at the core, rather than strand, granularity. This feature was designed primarily to allow customers to be in compliance with Oracle's "Hard Partitions" CPU licensing requirements. As such, it included the following restrictions:


  • Existence and enforcement of a cap on the number of cores that could be allocated to a domain. This cap was set implicitly the first time whole-core allocation was requested for a domain.
  • Severe restrictions on when & how whole-core allocation mode could be entered & exited, and when the cap could be modified. In most cases, the domain needed to be stopped and unbound before effecting these changes.

Since introducing this capability, we've heard from several customers who want to use whole-core allocation, but for the purposes of improved performance and manageability, not for complying with the hard-partitioning licensing requirement. They found the existing CLI semantics untenable for their needs. Particular concerns raised included:


  • Inability to configure whole-core without a cap
  • Difficulty in changing the value of the cap
  • The requirement to stop & unbind the domain to enable/disable whole-core allocations

In response to these issues, we've enhanced the CLI options for managing whole-core allocations in this release. These new options allow you to enable whole-core allocation granularity without the onerous restrictions necessitated by the hard-partitioning CPU licensing requirements. Here's what we did:


  • Provided new, separate CLI options for enabling whole-core allocation and for adding/setting/removing a cap
  • Removed the restrictions on when you can switch in and out of whole-core mode, as long as no cap is set

Customers can continue to use the original whole-core CLI we introduced in 2.0 to comply with the hard partitioning requirements (we've not changed the semantics of those operations at all). However, we do recommend everyone migrate to these new CLI options, as they're clearer, more flexible, and suitable whether you're looking for whole-core allocation to meet the hard-partitioning requirement or solely for the other benefits of allocating CPU resources at the whole-core granularity. The key determining factor is whether or not you configure a cap on the number of cores in the domain: if you do, you're in compliance with the hard-partitioning licensing requirements, and you incur the restrictions outlined above. If you configure whole-core allocation without a cap, we assume you're not interested in compliance with those terms, and you are not bound by those restrictions.

Here is a brief synopsis of the new CLI options. For full details consult the Oracle VM Server for SPARC 2.2 Reference Manual (or the ldm(1M) man page if you have 2.2 installed).

To configure whole-core allocation with no cap:

ldm add-core num ldom
ldm set-core num ldom
ldm remove-core [-f] num ldom

To configure a cap for a domain separately from allocating cores to the domain:

ldm add-domain ... [max-cores=[num|unlimited]] ldom
ldm set-domain ... [max-cores=[num|unlimited]] ldom
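
For example, to run a hypothetical domain ldom1 on four whole cores with no cap, and only later impose a cap for licensing compliance (the core counts are arbitrary):

ldm set-core 4 ldom1
ldm set-domain max-cores=4 ldom1

Keep in mind that as soon as a cap is set, the hard-partitioning restrictions described above apply once more.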

Saturday Oct 01, 2011

Dynamic Threading in the SPARC T4 Processor

As mentioned in my last entry, we've made some specific enhancements to Oracle VM Server for SPARC for the T4 family of systems. One of these enhancements is the addition of dynamic CPU threading controls. These controls leverage the dynamic threading feature of the T4 processor to further optimize single-threaded performance for CPU-bound workloads. This is accomplished by limiting the number of active threads per core, allowing more of the core's hardware resources to be allocated to the remaining threads and thereby maximizing the instructions per cycle (IPC) they can achieve. For much more info, read the white paper we just released.
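
The control is exposed as a property of the domain. A minimal sketch (ldom1 is a placeholder; consult the white paper or the ldm(1M) man page for the definitive syntax):

ldm set-domain threading=max-ipc ldom1
ldm set-domain threading=max-throughput ldom1

The first form limits the active threads per core to maximize per-thread performance; the second (the default) activates all threads for maximum aggregate throughput.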

Monday Sep 26, 2011

SPARC SuperCluster T4-4 Announced

Today we announced the all-new SPARC SuperCluster T4-4 engineered system. We're excited about all the new SPARC T4 Processor based systems, with the new core design of the T4 delivering much better single-thread performance than any other processor in the family.

But the incredible performance characteristics of the SPARC SuperCluster T4-4 in particular are made possible through enhancements up & down the hardware & software stacks. This extends to virtualization of course. Oracle VM Server for SPARC is a key part of the system, and we have been focusing some of our engineering effort on specific enhancements for the SPARC SuperCluster system.

You can learn much more about the new capabilities of the SPARC SuperCluster engineered system, along with the rest of the T4-based systems, and some other exciting news about Oracle VM Server for SPARC, all at Oracle OpenWorld 2011 next week!

Wednesday Jun 08, 2011

Oracle VM Server for SPARC 2.1 Released!

I only have time for a quick post this morning, but we're very excited to announce the release of Oracle VM Server for SPARC 2.1. This release contains one of our most requested features: live migration! There will be plenty of folks blogging & describing that feature in detail; in future blog posts I'm going to highlight some of the other lesser-known features of the release.

More to come!

Thursday May 19, 2011

We're live on http://blogs.oracle.com

Just a quick note to let you know that the Sun & Oracle blogging infrastructures have now been combined at blogs.oracle.com. For now, links to blogs.sun.com will redirect, but it'd be wise to update your URLs now.

You'll also note a different look to my blog; the previous template I had been using at Sun is no longer available.

Monday Feb 07, 2011

New security white paper for Oracle VM Server for SPARC

A new technical white paper entitled Secure Deployment of Oracle VM Server for SPARC has just been published and is available on the Oracle Technology Network.

You can also read more about it on Oracle's Virtualization Blog. The specific article is here.

BTW, both the Oracle Technology Network and Oracle's Virtualization Blog are very useful resources for learning more about all of Oracle's virtualization offerings.

Thursday Nov 18, 2010

Oracle VM Server for SPARC 2.0 available on T2 & T2 Plus Servers

As I mentioned last month, Oracle VM Server for SPARC 2.0 was initially qualified on only a limited number of T3-based systems. I'm happy to announce that we've now completed qualification of this latest release on our UltraSPARC T2 and T2 Plus platforms.

In order to upgrade, you will need to download firmware updates for your particular system model. In addition, there is a required patch for the LDoms Manager component (patch ID 145880-02). The latest version of the Oracle VM Server for SPARC 2.0 Release Notes contains information on the required & minimum software and firmware levels for each platform to take full advantage of all the features in 2.0.

The firmware patches can be downloaded from the Oracle Technology Network. The Oracle VM Server for SPARC 2.0 software can be downloaded from the Oracle Virtualization site. Finally, the LDoms Manager patch is available on SunSolve and My Oracle Support.

Wednesday Oct 27, 2010

Memory DR in Oracle VM Server for SPARC 2.0

One of the major new features in Oracle VM Server for SPARC 2.0 (in conjunction with Solaris 10 9/10 aka Solaris 10 Update 9) is the ability to dynamically add or remove memory from a running domain - so-called memory dynamic reconfiguration (or memory DR for short). There are a few important caveats to keep in mind when using the memory DR feature:
  • Requests to remove sizable amounts of memory can take a considerable amount of time
  • It is not guaranteed to successfully remove all the memory requested (a smaller amount can end up being removed)
  • Removing memory that was in use at boot time results in the "leaking" of the memory that was used to hold the mappings for the removed memory, making that (much smaller but not insignificant amount of) memory unavailable until the domain is rebooted

While these restrictions imply a set of reasonable recommendations (e.g. avoid using memory DR to remove a significant portion of a domain's memory), there is a particular use-case that's especially problematic: shrinking the factory-default control domain in advance of creating guest domains. We strongly recommend against relying on memory DR for this purpose. Although it is tempting to try, because it would eliminate the need to reboot the control domain to complete the process, the risks outweigh the benefits.

This issue and a procedure to avoid it are described in the Initial Configuration of the Control Domain section of the Oracle VM Server for SPARC 2.0 Administration Guide. I recommend folks read it in its entirety. In brief, it involves the use of a new CLI operation introduced in 2.0 to explicitly place the control domain into a delayed reconfiguration mode:

# ldm start-reconf primary

Note that if you don't explicitly execute this command before you attempt to resize the memory of the factory-default control domain (and you're running 2.0 with Solaris 10 Update 9 or later), memory DR will be invoked by default, potentially leading to the issues listed above. Caveat emptor!
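
For completeness, here's a minimal sketch of the recommended procedure (the 8G target is just an example size; the Admin Guide section mentioned above is the authoritative reference):

# ldm start-reconf primary
# ldm set-memory 8G primary
# shutdown -y -g0 -i6

The memory change is deferred and takes effect when the control domain reboots, sidestepping memory DR entirely.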

Thursday Oct 07, 2010

Oracle VM Server for SPARC 2.0 released

We've just released version 2.0 of Oracle VM Server for SPARC (previously known as LDoms). This is a major new release of our SPARC virtualization technology. Here are just some of the new features:
  • Memory Dynamic Reconfiguration - resize your domain's memory while Solaris is running
  • Direct I/O - You can now assign individual PCI-E I/O devices to any domain
  • Affinity Binding for CPUs - optimizes placement of allocated strands onto cores to minimize unnecessary sharing of cores between domains
  • Support for Full Core Allocation - Optionally allocate CPU resources at the core, rather than strand, granularity. Guarantees only full cores are allocated to the domain, while enforcing a cap.
  • Improvements to domain migration performance and the relaxation of certain restrictions
  • Improvements to virtual disk multipathing to handle end-to-end I/O path failures between guest domain and storage device
  • More robust system restoration from XML files: the restoration of the primary domain can now be automated as well
  • Important fixes to MAC address assignment and more robust MAC address collision detection
  • New virtinfo(1) command and libv12n(3LIB) library to extract virtualization info from Solaris 10 9/10 or later
  • New power management capabilities on SPARC T3 based platforms

In addition, there are numerous smaller improvements and bug fixes. The official product documentation has all the details.

Oracle VM Server for SPARC 2.0 runs on all supported SPARC T-Series servers (i.e. T2, T2 Plus & T3 based platforms), including the just-announced SPARC T3 systems. As of this writing, version 2.0 is fully qualified on the T3-1 server (which is now available to order). We will announce full support for our other servers (including the existing UltraSPARC T2 and T2 Plus based platforms) in the coming weeks as we complete our qualification activities.

For further information, head over to Oracle's virtualization site.

Tuesday Aug 17, 2010

Oracle VM Server for SPARC

We are now six months past the acquisition of Sun by Oracle. Certainly a lot has changed, but with respect to SPARC virtualization, there's much more that hasn't changed. One big change for us is that Logical Domains has been re-branded as "Oracle VM Server for SPARC". Another adjustment is that we are more restricted in discussing future features & roadmap info than we were at Sun.

More importantly, here's what hasn't changed: the development team is essentially intact (we've lost a few and gained a few) and we remain fully engaged (in fact there was zero schedule impact caused by the acquisition). Our strategy and tactical roadmap have changed very little. If anything, Oracle is more committed to SPARC and our virtualization technology than Sun was. We continue to work on new features & releases exactly as we had been doing as part of Sun. Of course, there have been some tweaks to our priorities & feature set, and we are working to better integrate our technology with other products within Oracle, but the overall strategic direction has not changed. The bottom line is that everything's full steam ahead for LDoms, er, Oracle VM Server for SPARC.

Finally, go check out the story: Oracle VM Server for SPARC - Powering Enterprise-class Virtualization, currently featured at oracle.com, or go here.

Wednesday Jan 20, 2010

Logical Domains 1.3 Available

Logical Domains (LDoms) 1.3 is out! Some of the key new features include:
  • Dynamic Resource Management (DRM): policy-based load-balancing of cpu resources between domains
  • With the release of Solaris 10 10/09 (aka update 8), ability to dynamically add & remove crypto devices from domains
  • Significant speedup of warm migration (aka domain mobility) operations through HW crypto & compression algorithms
  • Faster network failover with link-based IPMP support
  • Domain hostid & MAC address properties can now be modified via the ldm set-domain command

DRM is a key new feature for which we've been getting many requests ever since LDoms launched in April 2007. With it, you can specify policies that determine how cpu resources are load-balanced between running logical domains. For each policy, you can specify (among other things) the min & max cpu count desired, the high & low utilization thresholds you'd like to stay within, the speed at which cpus are added or deleted, the time of day during which the policy is active, and a priority. Each specified policy can apply to one or more domains and can be active or inactive. Multiple policies can be active simultaneously, even if they affect the same domain(s). That's where the priority value comes in; lower numbers denote higher priority (valid values are 1-9999).
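
To give a flavor of the CLI (all values below are invented for illustration; see the Administration Guide section referenced below for the full attribute list), a policy that keeps a domain between 4 and 32 vcpus during business hours might look like this:

ldm add-policy vcpu-min=4 vcpu-max=32 util-lower=25 util-upper=75 tod-begin=09:00 tod-end=18:00 priority=1 name=daytime ldom1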

See complete details on setting up DRM policies in this section of the Logical Domains 1.3 Administration Guide.

Further information about LDoms in general, and links to download the LDoms 1.3 components are available (as always) here.

Tuesday Sep 08, 2009

LDoms 1.2 available via IPS

Good news for folks running LDoms under OpenSolaris (either 2009.06 or the latest 2010.02 development builds): the LDoms 1.2 software is now available in the /release and /dev IPS repositories. You can get it by running 'pfexec pkg install ldomsmanager'.

Saturday Jul 04, 2009

LDoms 1.2 is out!

LDoms 1.2 is now available. I've included a snippet of the initial marketing announcement below. There will be more official marketing activity in the coming weeks (including the updating of http://www.sun.com/ldoms), but the bits are available now.

The LDoms team is pleased to announce the availability of LDoms v1.2.

In this new release you will find the following features:

  • Support for CPU power management
  • Support for jumbo frames
  • Restriction of delayed reconfiguration operations to the control domain
  • Support for configuring domain dependencies
  • Support for autorecovery of configurations
  • Support for physical-to-virtual migration tool
  • Support for configuration assistant tools

We look forward to you downloading LDoms Manager v1.2 now from here. The full documentation set for LDoms 1.2 can be found here.

In addition, as I mention here, this release fully enables the enhanced FMA IO fault diagnosis changes introduced into the Solaris 10 5/09 release (aka s10u7).

UPDATE: One important thing I forgot to mention is that this release supports OpenSolaris 2009.06 for SPARC. OpenSolaris can be run either as the control/io/service domain or within a guest domain. In addition, the LDoms 1.2 LDom Manager & related files are available as an IPS package in the /dev repository under SUNWldom.
CORRECTION: I jumped the gun: the IPS package with LDoms 1.2 is not quite ready yet, but should be available very soon. When it is, it'll be under "ldomsmanager".

Enjoy!

UPDATE 2: LDoms 1.2 is now available via IPS. Details here.

