Tuesday Aug 20, 2013

Cross-CPU migration in LDoms 3.1

Oracle VM Server for SPARC (aka LDoms) 3.1 is now available. You can read all about it over at the Virtualization Blog. I want to focus on the enhancements we've made to cross-CPU migration.

Cross-CPU migration is the ability to migrate a running virtual machine (VM) from a SPARC server with one type of CPU chip to another of a different type. Prior to LDoms 3.1, the only way to perform such a migration was to specify a "cpu-arch" property of "generic" when first defining the VM. Doing so allowed VMs to be migrated between systems based on the T2, T2+, T3 and T4 processors. However, specifying this property results in the VM running without taking advantage of all the features of the underlying CPU, negatively impacting performance. The most serious impact was loss of hardware-based crypto acceleration on the T4 processors. This was unfortunately necessary, due to major changes in HW crypto acceleration introduced in T4.

With the release of our T5 and M5 processors and systems, we didn't want our customers to continue to suffer this penalty when performing cross-CPU migrations among these newer platforms. To accomplish that, we have defined a new migration "class" (i.e. another value for the "cpu-arch" property), named "migration-class1". With LDoms 3.1, you can now specify "cpu-arch=migration-class1" on your T4, T5, and M5 based systems, allowing those VMs to be migrated across systems based on those three processor families. Most important of all, your VMs will run with full HW crypto capabilities (not to mention support for the larger page sizes available in these processors)!
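
For example, the new class is set with the usual ldm set-domain syntax, after which the running domain can be live migrated to another machine in the class. A minimal sketch (the domain name ldg1 and target host name are illustrative; cpu-arch can only be changed while the domain is stopped, so set it before starting the domain):

# ldm set-domain cpu-arch=migration-class1 ldg1
# ldm migrate-domain ldg1 target-t5-host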

Ideally we would have provided this new migration class at the time T5 and M5 were first made available, but for various reasons that was not feasible. It is only available now, with LDoms 3.1. And as with any new LDoms release, there are minimum Solaris and firmware versions required to get all the features of the release; please consult the relevant documentation for details.

Note that the "migration-class1" family does _not_ include the T2, T2+ or T3 processors. To perform cross-CPU migration between those processor families (or between them and T4), you must continue to use the "generic" migration class. In addition, live migrations from T2/T2+/T3 systems to T5/M5 systems (and vice-versa) are not supported. Now, you probably noticed that T4 is in both the "generic" and "migration-class1" families, so you might be tempted to migrate from (say) T2 to T5 in 2 hops, T2->T4 then T4->T5. The problem with that approach is that you'll need to switch the migration class of the VM after it is migrated to the T4 platform. Doing so requires the VM to be restarted, negating the benefit of live migration. For this reason, we recommend cold migration in this scenario.
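
To make that restart concrete: once the VM lands on the T4 system, switching its migration class looks roughly like this (a sketch only; the domain name is illustrative, and depending on the release the domain may also need to be unbound and re-bound, so check the Administration Guide):

# ldm stop-domain ldg1
# ldm set-domain cpu-arch=migration-class1 ldg1
# ldm start-domain ldg1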

Thursday May 24, 2012

Oracle VM Server for SPARC 2.2: Improved whole-core allocation

We've just released Oracle VM Server for SPARC 2.2! This release contains some major new features, along with numerous other improvements and (of course) bug fixes. I'll only briefly highlight the key new features here; visit Oracle's Virtualization Blog for more information on this release, including where to download the software, how to access the documentation, and pointers to blogs describing other features in the release. In this post, I'm going to focus on the improvements we've made in configuring your VMs using whole-core allocation.

First, the major new features:


  • Cross-CPU live migration - Now migrate between T-series systems of different CPU types & frequencies
  • Single-root I/O virtualization (SR-IOV) on Oracle's SPARC T3 and SPARC T4 platforms
  • Improved virtual networking performance through explicit configuration of extended mapin space

In addition, 2.2 includes two features first introduced via updates to 2.1: dynamic threading on the SPARC T4 processor, and support for the SPARC SuperCluster platform.


Whole core allocation improvements

In Oracle VM Server for SPARC 2.0, we introduced the concept of whole-core allocation, allowing users to optionally allocate CPU resources at the core, rather than strand, granularity. This feature was designed primarily to allow customers to be in compliance with Oracle's "Hard Partitions" CPU licensing requirements. As such, it included the following restrictions:


  • Existence and enforcement of a cap on the number of cores that could be allocated to a domain. This cap was set implicitly the first time whole-core allocation was requested for a domain.
  • Severe restrictions on when & how whole-core allocation mode could be entered & exited, and when the cap could be modified. In most cases, the domain needed to be stopped and unbound before making these changes.

Since introducing this capability, we've heard from several customers who want to use whole-core allocation, but for the purposes of improved performance and manageability, not for complying with the hard-partitioning licensing requirement. They found the existing CLI semantics untenable for their needs. Particular concerns raised included:


  • Inability to configure whole-core without a cap
  • Difficulty in changing the value of the cap
  • The requirement to stop & unbind the domain to enable/disable whole-core allocations

In response to these issues, we've enhanced the CLI options for managing whole-core allocations in this release. These new options allow you to enable whole-core allocation granularity without the onerous restrictions necessitated by the hard-partitioning CPU licensing requirements. Here's what we did:


  • Provided new, separate CLI options for enabling whole-core allocation and for adding/setting/removing a cap
  • Removed the restrictions on when you can switch in and out of whole-core mode, as long as no cap is set

Customers can continue to use the original whole-core CLI we introduced in 2.0 to comply with the hard partitioning requirements (we've not changed the semantics of those operations at all). However, we do recommend everyone migrate to these new CLI options, as they're clearer, more flexible, and suitable whether you're looking for whole-core allocation to meet the hard-partitioning requirement or solely for the other benefits of allocating cpu resources at whole-core granularity. The key determining factor is whether or not you configure a cap on the number of cores in the domain; if you do, you're in compliance with the hard-partitioning licensing requirements, and you incur the restrictions outlined above. If you configure whole-core allocation without a cap, we assume you're not interested in compliance with those terms, and you are not bound by the associated restrictions.

Here is a brief synopsis of the new CLI options. For full details consult the Oracle VM Server for SPARC 2.2 Reference Manual (or the ldm(1M) man page if you have 2.2 installed).

To configure whole-core allocation with no cap:

ldm add-core num ldom
ldm set-core num ldom
ldm remove-core [-f] num ldom

To configure a cap for a domain separately from allocating cores to the domain:

ldm add-domain ... [max-cores=[num|unlimited]] ldom
ldm set-domain ... [max-cores=[num|unlimited]] ldom
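
As a concrete illustration (the domain name is illustrative), the first command below gives a domain four whole cores with no cap, and the second separately applies a cap of four cores for hard-partitioning compliance; setting max-cores=unlimited removes the cap again:

# ldm set-core 4 ldg1
# ldm set-domain max-cores=4 ldg1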

Wednesday Jun 08, 2011

Oracle VM Server for SPARC 2.1 Released!

I only have time for a quick post this morning, but we're very excited to announce the release of Oracle VM Server for SPARC 2.1. This release contains one of our most requested features: live migration! There will be plenty of folks blogging & describing that feature in detail; in future blog posts I'm going to highlight some of the other lesser-known features of the release.

More to come!

Thursday Nov 18, 2010

Oracle VM Server for SPARC 2.0 available on T2 & T2 Plus Servers

As I mentioned last month, Oracle VM Server for SPARC 2.0 was initially qualified on only a limited number of T3-based systems. I'm happy to announce that we've now completed qualification of this latest release on our UltraSPARC T2 and T2 Plus platforms.

In order to upgrade, you will need to download firmware updates for your particular system model. In addition, there is a required patch for the LDoms Manager component (patch ID 145880-02). The latest version of the Oracle VM Server for SPARC 2.0 Release Notes contains information on the required & minimum software and firmware levels for each platform to take full advantage of all the features in 2.0.

The firmware patches can be downloaded from the Oracle Technology Network. The Oracle VM Server for SPARC 2.0 software can be downloaded from the Oracle Virtualization site. Finally, the LDoms Manager patch is available on SunSolve and My Oracle Support.

Wednesday Oct 27, 2010

Memory DR in Oracle VM Server for SPARC 2.0

One of the major new features in Oracle VM Server for SPARC 2.0 (in conjunction with Solaris 10 9/10 aka Solaris 10 Update 9) is the ability to dynamically add or remove memory from a running domain - so-called memory dynamic reconfiguration (or memory DR for short). There are a few important caveats to keep in mind when using the memory DR feature:
  • Requests to remove sizable amounts of memory can take a considerable amount of time
  • It is not guaranteed that all the requested memory will be removed (a smaller amount may end up being removed)
  • Removing memory that was in use at boot time "leaks" the memory used to hold the mappings for the removed memory; that (much smaller, but not insignificant) amount of memory remains unavailable until the domain is rebooted

While these restrictions imply a set of reasonable recommendations (e.g. avoid using memory DR to remove a significant portion of a domain's memory), there is one use case that's especially problematic: shrinking the factory-default control domain in advance of creating guest domains. We strongly recommend against relying on memory DR for this purpose. Although it is tempting to try because it would eliminate the need to reboot the control domain to complete the process, the risks outweigh the benefits.

This issue and a procedure to avoid it are described in the Initial Configuration of the Control Domain section of the Oracle VM Server for SPARC 2.0 Administration Guide. I recommend folks read it in its entirety. In brief, it involves the use of a new CLI operation introduced in 2.0 to explicitly place the control domain into a delayed reconfiguration mode:

# ldm start-reconf primary
Note that if you don't explicitly execute this command before you attempt to resize the memory of the factory-default control domain (and you're running 2.0 with Solaris 10 Update 9 or later), memory DR will be invoked by default, potentially leading to the issues listed above. Caveat Emptor!
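
Putting the pieces together, a minimal sketch of shrinking the factory-default control domain via delayed reconfiguration might look like the following (the memory size is illustrative; the complete procedure, including saving the resulting configuration to the service processor, is in the Administration Guide section mentioned above):

# ldm start-reconf primary
# ldm set-memory 8G primary
# shutdown -y -g0 -i6

For guest domains running Solaris 10 9/10 or later, the same set-memory, add-memory, and remove-memory subcommands act directly on the running domain via memory DR.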

Thursday Oct 07, 2010

Oracle VM Server for SPARC 2.0 released

We've just released version 2.0 of Oracle VM Server for SPARC (previously known as LDoms). This is a major new release of our SPARC virtualization technology. Here are just some of the new features:
  • Memory Dynamic Reconfiguration - resize your domain's memory while Solaris is running
  • Direct I/O - You can now assign individual PCI-E I/O devices to any domain
  • Affinity Binding for CPUs - optimizes placement of allocated strands onto cores to minimize unnecessary sharing of cores between domains
  • Support for Full Core Allocation - Optionally allocate CPU resources at the core, rather than strand, granularity. Guarantees only full cores are allocated to the domain, while enforcing a cap.
  • Improvements to domain migration performance and the relaxation of certain restrictions
  • Improvements to virtual disk multipathing to handle end-to-end I/O path failures between guest domain and storage device
  • More robust system restoration from XML files: the restoration of the primary domain can now be automated as well
  • Important fixes to MAC address assignment and more robust MAC address collision detection
  • New virtinfo(1) command and libv12n(3LIB) library to extract virtualization info from Solaris 10 9/10 or later
  • New power management capabilities on SPARC T3 based platforms

In addition, there are numerous smaller improvements and bug fixes. The official product documentation has all the details.

Oracle VM Server for SPARC 2.0 runs on all supported SPARC T-Series servers (i.e. T2, T2 Plus & T3 based platforms), including the just-announced SPARC T3 systems. As of this writing, version 2.0 is fully qualified on the T3-1 server (which is now available to order). We will announce full support for our other servers (including the existing UltraSPARC T2 and T2 Plus based platforms) in the coming weeks as we complete our qualification activities.

For further information, head over to Oracle's virtualization site.

Tuesday Aug 17, 2010

Oracle VM Server for SPARC

We are now six months past the acquisition of Sun by Oracle. Certainly a lot has changed, but with respect to SPARC virtualization, there's much more that hasn't changed. One big change for us is that Logical Domains has been re-branded as "Oracle VM Server for SPARC". Another adjustment is that we are more restricted in discussing future features & roadmap info than we were at Sun.

More importantly, here's what hasn't changed: the development team is essentially intact (we've lost a few and gained a few) and we remain fully engaged (in fact there was zero schedule impact caused by the acquisition). Our strategy and tactical roadmap have changed very little. If anything, Oracle is more committed to SPARC and our virtualization technology than Sun was. We continue to work on new features & releases exactly as we had been doing as part of Sun. Of course, there have been some tweaks to our priorities & feature set, and we are working to better integrate our technology with other products within Oracle, but the overall strategic direction has not changed. The bottom line is that everything's full steam ahead for LDoms, er, Oracle VM Server for SPARC.

Finally, go check out the story: Oracle VM Server for SPARC - Powering Enterprise-class Virtualization, currently featured at oracle.com, or go here.

Wednesday Jan 20, 2010

Logical Domains 1.3 Available

Logical Domains (LDoms) 1.3 is out! Some of the key new features include:
  • Dynamic Resource Management (DRM): policy-based load-balancing of cpu resources between domains
  • With the release of Solaris 10 10/09 (aka update 8), ability to dynamically add & remove crypto devices from domains
  • Significant speedup of warm migration (aka domain mobility) operations through HW crypto & compression algorithms
  • Faster network failover with link-based IPMP support
  • Domain hostid & MAC address properties can now be modified via the ldm set-domain command
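
For example (the values and domain name are illustrative, and the domain generally needs to be stopped for these changes to take effect; see the 1.3 documentation for specifics):

# ldm set-domain hostid=0x84a1b2c3 mac-addr=00:14:4f:f8:00:01 ldg1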

DRM is a key new feature for which we've been getting many requests ever since LDoms launched in April 2007. With it, you can specify policies that determine how cpu resources are load-balanced between running logical domains. For each policy, you can specify (among other things) the min & max cpu count desired, the high & low utilization thresholds you'd like to stay within, the speed at which cpus are added or deleted, the time of day during which the policy is active, and a priority. Each specified policy can apply to one or more domains and can be active or inactive. Multiple policies can be active simultaneously, even if they affect the same domain(s). That's where the priority value comes in; lower numbers denote higher priority (valid values are 1-9999).
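
Here's a rough sketch of what a DRM policy definition might look like; the property names follow the ldm(1M) man page as I recall it, and the values, policy name, and domain name are all illustrative (see the Administration Guide section linked below for the authoritative syntax):

# ldm add-policy vcpu-min=4 vcpu-max=16 util-lower=25 util-upper=75 \
  tod-begin=09:00 tod-end=18:00 priority=1 enable=yes name=daytime ldg1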

See complete details on setting up DRM policies in this section of the Logical Domains 1.3 Administration Guide.

Further information about LDoms in general, and links to download the LDoms 1.3 components are available (as always) here.

Saturday Jul 04, 2009

LDoms 1.2 is out!

LDoms 1.2 is now available. I've included a snippet of the initial marketing announcement below. There will be more official marketing activity in the coming weeks (including the updating of http://www.sun.com/ldoms), but the bits are available now.

The LDoms team is pleased to announce the availability of LDoms v1.2.

In this new release you will find the following features:

  • Support for CPU power management
  • Support for jumbo frames
  • Restriction of delayed reconfiguration operations to the control domain
  • Support for configuring domain dependencies
  • Support for autorecovery of configurations
  • Support for physical-to-virtual migration tool
  • Support for configuration assistant tools.
We look forward to you downloading LDoms Manager v1.2 now from here. The full documentation set for LDoms 1.2 can be found here.

In addition, as I mention here, this release fully enables the enhanced FMA IO fault diagnosis changes introduced into the Solaris 10 5/09 release (aka s10u7).

UPDATE: One important thing I forgot to mention is that this release supports OpenSolaris 2009.06 for SPARC. OpenSolaris can be run either as the control/io/service domain or within a guest domain. In addition, the LDoms 1.2 LDom Manager & related files are available as an IPS package in the /dev repository under SUNWldom.
CORRECTION: I jumped the gun: the IPS package with LDoms 1.2 is not quite ready yet, but should be available very soon. When it is, it'll be under "ldomsmanager".

Enjoy!

UPDATE 2: LDoms 1.2 is now available via IPS. Details here.

Sunday May 03, 2009

Improving integration of LDoms and FMA

Scott Davenport recently posted a blog entry announcing the availability of Solaris 10 Update 7 (aka Solaris 10 5/09 or just s10u7), and touted some of the FMA improvements in the release. Of particular interest for LDoms and its users are improvements in diagnosis of IO faults when one or more IO root complexes are allocated to domains other than the control domain (i.e. so-called root domains). These improvements were designed & developed through a collaborative effort between the LDoms & FMA teams. That collaboration is nothing new; there has been tight coordination between LDoms and FMA, in terms of both the technology and the teams, since before the initial release of Logical Domains technology in April 2007.

The changes needed to resolve the IO diagnosis problems required new interfaces between the FMA and LDoms code, as well as new software on both sides of the interface. As Scott mentioned, the FMA software is now available in the Solaris 10 5/09 release. However, the necessary changes on the LDoms side (in the LDoms Manager) will be available in the upcoming 1.2 release of Logical Domains, currently scheduled for this summer. That's no reason not to install s10u7 in a Logical Domains environment now; all currently supported versions of LDoms will function correctly with s10u7 installed in a control domain, guest domains, or both. And once LDoms 1.2 is released, simply installing the new firmware & LDoms Manager that make up the 1.2 update on a system running s10u7 will automatically enable the improved IO diagnosis features (along with several other new LDoms features, but that's the subject of another post).

Wednesday Dec 24, 2008

LDoms 1.1

Just in time for the holidays, LDoms version 1.1 is now available! This is a major new release of Logical Domains technology, with an extensive list of new features & bugfixes. Here are the highlights:

Major Features Introduced in LDoms version 1.1:

  • Warm and Cold Migration
  • Network NIU Hybrid IO
  • VLAN Support for Virtual Network Interface and Virtual Switch
  • Public XML Interface and XMPP Connection with the Domain Manager
  • Virtual IO DR
  • Virtual Disk Multipathing and Failover
  • Virtual Switch Support for Link Aggregated Interface
  • iostat(1M) Support in Guest Domains

Alex has more details on these features here.

Other Improvements Include:

  • Improved Interrupt Distribution (CR 6671853)
  • Performance improvements for virtual IO (CR 6689871, 6640564)
  • Solaris Installation to Single Slice Disk (CR 6685162)
  • Numerous Improvements and Extensions to our Domain Services Infrastructure (CR 6560890)
  • Improved Console Behavior when not using Virtual Console (CR 6581309)
  • VDisk EFI Label Support for Disk Image (CR 6558966)
  • VDisk Support for Disks Managed by Multipathing Software (Veritas DMP, EMC PowerPath) (CR 6694540, 6637560)
  • LDoms Manager Improvements to IO FMA (CR 6463270)
  • ldm list -l now displays MAC assigned to guest (CR 6586046)
  • Improved Error Messages (CR 6741733, 6590124, 6715063)
  • ldm list -o provides fine-grained control of configuration display options (CR 6562222)
  • More accurate utilization percentage reporting (CR 6637955, 6709020)
  • Can now explicitly set a domain's hostid (CR 6670605)
  • Improved persistence of VIO service and VDS volume names (CR 6729544, 6771264)
  • More predictable behavior when deciding which cpus to DR out of a domain (CR 6567372)
  • More accurate annotations in ldm ls-spconfig output (CR 6744046)
  • Better support for large, fragmented memory configs in a domain (CR 6749507)
  • Support for setting persistent WANboot keys from OBP (CR 6510365)
  • Lots of Bug Fixes (over 100)

Get it here!

Monday Oct 13, 2008

New Sun SPARC Enterprise T5440 Server runs LDoms 1.0.3

Today Sun is announcing the latest in our line of sun4v SPARC CMT systems: the Sun SPARC Enterprise T5440 Server. This is a four-socket server based on our UltraSPARC T2 Plus processor.

With up to 256 available threads, this is the best platform yet for running our Logical Domains (LDoms) virtualization technology. LDoms 1.0.3, released last May, fully supports all shipping configurations of the T5440. All the necessary firmware & software comes pre-installed. If you need to download any of the LDoms 1.0.3 software, just go here.

In addition, there's a new resource available for helping administrators get the most out of their LDoms installation: the LDoms Community Cookbook. It just went live today; in fact, at the time of this writing, not all sections are live yet. Please check back often, and remember, this is a Wiki & a community resource, so feel free to add content, make corrections, etc.

To read what other Sun engineers have to say about the T5440, see Allan Packer's blog for an updated list of T5440-related blog entries.


Note: the Sun SPARC Enterprise T5440 Server is not to be confused with our recently announced Sun Netra T5440 Server, which is a two-socket carrier-grade server.

Monday May 19, 2008

LDoms 1.0.3

Logical Domains (LDoms) 1.0.3 is now available. This release is mainly intended to enable many new features included in Solaris 10 5/08 (aka update 5). Jason did an excellent job with details & logistics in this blog entry, so there's no need for me to repeat that info here.

One thing I do want to mention in terms of LDom Manager functionality is that with this release, the XML format produced by the

ldm ls-constraints -x
command has changed. While the LDom Manager will continue to accept our previous, so-called v2 XML format, as of 1.0.3, it also accepts & produces the new v3 format. This format is designed to closely align with the schema defined as part of the draft Open Virtual Machine Format (OVF) specification.
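
For example, a domain's constraints can be exported in this format and later used to re-create the domain (a sketch; the domain and file names are illustrative, and the -i import option is described in the ldm man page):

# ldm ls-constraints -x ldg1 > /var/tmp/ldg1.xml
# ldm add-domain -i /var/tmp/ldg1.xml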

This is just the tip of the iceberg. Coming in LDoms 1.1 (currently targeted for release in Q4CY08) will be a complete XML-based control interface for monitoring & managing Logical Domains, based on this same v3 schema. In addition, it will utilize the XMPP transport, providing secure, standards-based XML messaging between client applications & the LDom Manager.

This combination of a standards-based schema over a standard XML transport provides a rich control interface for creating management applications. More details about this new management interface, including detailed specifications, will be forthcoming.

Thursday Oct 11, 2007

UltraSPARC T2 & LDoms 1.0.1

Sun is officially announcing the first products in its UltraSPARC T2 (US T2) based platform lineup today, the T5x20 series. You can read all about the details here and here. There are two big stories here related to our Logical Domains technology. For those of you who are new to my blog, Logical Domains (LDoms for short) is the name of Sun's virtualization technology for our SPARC CMT platforms that allows multiple operating systems to run concurrently on a single system.

These products represent the first of our CMT based platforms that are shipping with LDoms technology pre-installed from the factory. All future CMT servers from Sun will ship with the ability to run Logical Domains out of the box. This includes the LDoms-enabled hypervisor (our LDoms hypervisor runs on bare metal, and is embedded in the firmware of the platform), all the necessary Solaris components, and the LDoms Manager package (which is what my team works on).

This release also marks the introduction of version 1.0.1 of LDoms technology. Besides support for the new US T2 based platforms (and a slew of bug fixes), this release adds the ability to reset any domain, even one which owns physical I/O devices, while all other domains continue to run. Even the control domain, i.e. the one on which the LDoms Manager runs, can reboot while all other domains stay up. This is a major step forward in terms of RAS capability for LDoms.

As of today, LDoms version 1.0.1 is only available pre-installed on our newly announced US T2 based servers. Stay tuned here for information on the impending availability of this upgrade on our existing US T1 based platforms!

UPDATE: LDoms version 1.0.1 is now available for download here. This includes the firmware updates for US T1 platforms.

WARNING: There are two important caveats when upgrading from LDoms 1.0 to LDoms 1.0.1:

  • Configurations saved to the service processor under 1.0 are not usable under 1.0.1. The LDoms 1.0.1 Administration Guide describes the upgrade procedure that needs to be applied to work around this. Part of this procedure needs to be carried out BEFORE performing the actual upgrade!
  • You must upgrade both the firmware and LDom Manager components at the same time.
