Monday May 19, 2008

LDoms 1.0.3 Released

Logical Domains version 1.0.3 is now available for download. This version includes a number of feature enhancements delivered as part of a new LDom Manager (v1.0.3) and Solaris 10 Update (S10 5/08) release. New features include:
  • Disk enhancements for DVD boot support, user SCSI command (USCSICMD) input/output control pass-through and disk reset support, multihost (MHD) disks, disk DevID support, and format(1M) command fixes, including support for unformatted vDisks.
  • Improved Volume Manager support – you can now export a volume as a full disk and install on it.
  • Network improvements include support to use physical trunk devices for external connectivity.
  • Support for interrupt statistics reporting in guest domains using intrstat(1M).
  • Solaris Cluster software support in guest domains. (PreReq: LDom Manager 1.0.3, S10U5+patches).
  • XML format produced by the 'ldm ls-constraints -x' command now conforms to the new version 3 (v3) specification. (See Eric's blog for more info.)
Refer to the LDoms 1.0.3 Release Notes for a detailed list of features and bug fixes. The revised administration guide contains a number of additions, including a new section on virtual disk management.

Thursday Sep 27, 2007

Configuring vSwitch following a S10U4 (08/07) upgrade

With the release of Solaris 10 8/07 (aka S10U4), support was added to Solaris and the LDoms vSwitch for improved packet processing. The LDoms vSwitch can now program the MAC address of each vNet into the physical adapter, instead of placing the shared physical adapter in promiscuous mode. However, a bug was introduced when this support was added to the LDoms vSwitch driver. When plumbed as a network device, the new vSwitch fails to program its own MAC address into the adapter, and hence cannot receive any packets destined for it from the external network.

To enable networking between the guest domains and the service domain, the LDoms 1.0 Admin Guide (pg 42) recommends that users unplumb the physical adapter and plumb the vSwitch instead. For this to function correctly, the LDoms vSwitch needs to be assigned the same MAC address as the physical adapter. This workaround is only necessary when the vSwitch is plumbed as a network device and needs to receive packets from the external network. A patch to address this issue is in the works and will be released soon.
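The workaround can be sketched with the commands below; the interface names (e1000g0 for the physical adapter, vsw0 for the virtual switch), the MAC address placeholder, and the IP address are examples, so substitute the values from your own system:

```shell
# Note the MAC address of the physical adapter backing the vSwitch
ifconfig e1000g0 | grep ether

# Unplumb the physical adapter; the vSwitch takes over external connectivity
ifconfig e1000g0 down unplumb

# Plumb the vSwitch and assign it the physical adapter's MAC address
ifconfig vsw0 plumb
ifconfig vsw0 ether 0:14:4f:xx:xx:xx
ifconfig vsw0 192.168.1.10 netmask 255.255.255.0 broadcast + up
```

To make the change persistent across reboots, the same interface swap would also need to be reflected in the /etc/hostname.* files.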

The issue is also documented in the LDoms 1.0.1 release notes.

Wednesday Aug 08, 2007

Solaris and Linux - Running Side by Side

Now that we have a Linux LDom port, we can boot and run both operating systems concurrently in an LDom environment. See Ash's blog for a nice flash demo. The Linux LDom support was merged into the mainline kernel as of the 2.6.23 build last month. Thanks once again to both David Miller and Fabio Massimo for their excellent work.

Wednesday Jul 18, 2007

LDoms VIO Failover - Part Deux

The article on VIO failover provided an overview of how failover support can be added to the virtual IO infrastructure in an LDoms environment. This support allows a client domain to fail over its virtual disk and network from a primary service domain to an alternate (backup) service domain. In this blog, I outline the steps required to configure the primary and alternate service domains, and the client domain, for failover. See here for detailed instructions.

Wednesday Jun 27, 2007

Linux in a LDom

Thanks to the excellent work by David Miller, we now have support for running Linux in a LDom. Please see David's blog for more details.

Tuesday Jun 19, 2007

LDoms Virtual IO Failover

In the previous article on LDoms IO Virtualization I discussed how IO access is provided to a Logical Domain by proxying the capabilities of a physical device using virtual services and devices. One disadvantage of this approach is the inability to provide access to the physical IO devices when the service domain reboots or shuts down. On UltraSPARC-T1 processor ("Niagara") based systems like the T1000/T2000, this can be addressed using a simple failover solution in which access to IO devices is exported from both a primary and an alternate (backup) service domain. The I/O bridge on these systems consists of two leaves that can be partitioned, owned and reset independently by two different service domains. The virtual disk and network devices in an LDom are then configured to use both service domains and fail over from the primary to the alternate service domain in the event of a failure. Standard Solaris features like IP Multipathing (IPMP) and Solaris Volume Manager (SVM) mirroring are used for failure detection and for triggering a failover to the alternate services. See figure below.
[Figure: LDoms virtual IO failover configuration]

This configuration was implemented on a T2000 with files exported as vDisks, and two vNet devices connected to vSwitch services on the two service domains. A simple chess program and a disk benchmark were run on the LDom (ldg1) to generate load. Subsequently, the primary service domain was rebooted, triggering a failover to the alternate service domain. Note the very slight pause in the domain's operation at the time of failover. Also note that the pings to the primary service domain stall and then resume during the reboot. In a subsequent blog I will post step-by-step instructions for setting up the configuration described here.
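Assuming each service domain already exports one vDisk and one vNet to the guest (so that ldg1 sees disks c0d0/c0d1 and interfaces vnet0/vnet1), the guest-side failover setup can be sketched with standard Solaris tools; the device names and IP address below are illustrative:

```shell
# Inside the guest domain ldg1
# (assumes SVM state databases have already been created with metadb)

# Mirror the two vDisks, one from each service domain, with SVM
metainit d11 1 1 c0d0s0      # submirror on the primary's vDisk
metainit d12 1 1 c0d1s0      # submirror on the alternate's vDisk
metainit d10 -m d11          # create the mirror from the first submirror
metattach d10 d12            # attach the second submirror

# Place the two vNets, one per service domain, in an IPMP group
ifconfig vnet0 plumb 192.168.1.20 netmask + broadcast + group ipmp0 up
ifconfig vnet1 plumb group ipmp0 up
```

With this in place, SVM keeps disk IO going against the surviving submirror, and IPMP fails the network path over to the other vNet, when either service domain goes down.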

For a demo, click here.

NOTE: The ability to soft reset a PCI-E leaf or failover to an alternate service domain is not part of LDoms 1.0 and will require running LDoms release 1.0.1 or higher.

Sunday May 20, 2007

LDoms IO Virtualization

An UltraSPARC-T1 processor ("Niagara") based system like the T1000/T2000 has a limited number of onboard network/disk devices and PCI-E expansion slots. Though the PCI-E bus ports on these systems can be split and assigned to two different domains, that is far fewer than the maximum number (32) of logical domains allowed. This lack of direct I/O device access is addressed by providing I/O access to domains via virtualized devices that communicate with a 'service' domain, which completely owns a device along with its driver and functions as a proxy to the device.
[Figure: LDoms virtual IO architecture]

LDoms virtual device support comprises devices and services through which IO functionality is provided to all logical domains by proxying the capabilities of a physical device. The current LDoms virtual IO functionality includes support for virtual networking, disk, and console, along with their corresponding proxy servers. It also includes a general-purpose infrastructure for inter-domain communication called Logical Domain Channels (LDC).

The virtual network support is implemented using two components: the virtual network device and the virtual switch device. The virtual network (vnet) device is a simple Ethernet device that communicates with other vnet devices in the system via the virtual switch and/or directly over a point-to-point connection. The virtual switch (vsw) device functions as both a layer-2 network switch and a (de)multiplexer for packets sent and received through the physical network adapter. The virtual switch classifies incoming packets on the basis of the target vnet MAC address and switches them to the appropriate vnet device. Similarly, it acts as the forwarding agent for all packets originating from the vnets and destined for clients outside the box.
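As a sketch, a virtual switch and a vnet device might be configured from the control domain as follows; the backing physical device (e1000g0), the service name, and the guest domain name (ldg1) are examples:

```shell
# Create a virtual switch service backed by a physical adapter
ldm add-vsw net-dev=e1000g0 primary-vsw0 primary

# Give the guest domain ldg1 a vnet device attached to that switch
ldm add-vnet vnet1 primary-vsw0 ldg1
```

The vnet device then shows up as an ordinary Ethernet interface inside ldg1 and can be plumbed with ifconfig like any physical NIC.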

The virtual disk infrastructure provides all logical domains with access to block-level storage, through a virtual disk that is backed by a real disk, disk volume or file that is a physical resource of another domain, i.e., the service domain. Though the virtual disks (vdc) appear as regular disks in the client domain, the corresponding physical disk is owned and exported by the virtual disk service (vds) running on the service domain. Using a simple request/response mechanism, the client virtual disk forwards all its disk requests to the virtual disk server, and passes back to its filesystem the response it receives from the server.
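As a sketch, exporting a disk to a guest might look like this from the control domain; the backing device path and the guest domain name (ldg1) are examples:

```shell
# Create the virtual disk service on the service domain
ldm add-vds primary-vds0 primary

# Export a physical disk (a volume or a plain file also works) as a
# volume of that service
ldm add-vdsdev /dev/dsk/c1t1d0s2 vol1@primary-vds0

# Give the guest domain ldg1 a virtual disk backed by that volume
ldm add-vdisk vdisk1 vol1@primary-vds0 ldg1
```

Inside ldg1 the disk then appears as a regular c0dN device that can be labeled with format(1M) and used for installation or data.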

In addition to the disk and network infrastructure, the console IO from all domains except the primary domain is redirected to the service domain instead of the system's service processor. The service domain acts as a concentrator for all console traffic and exports console access to domain users via Unix sockets. A service domain providing console services consists of two components: a virtual console concentrator (vcc) driver and the virtual network terminal server daemon (vntsd).
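As a sketch, console services might be set up and used as follows; the TCP port range is an example:

```shell
# Create a virtual console concentrator service with a range of TCP ports
ldm add-vcc port-range=5000-5100 primary-vcc0 primary

# Enable the virtual network terminal server daemon
svcadm enable vntsd

# Connect to a guest's console on the port it was assigned
telnet localhost 5000
```

Each guest domain is bound to one port in the range, so its console remains reachable from the service domain even while the guest itself is down at the OpenBoot prompt.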

Finally, the core infrastructure for communication between devices, services and clients in all domains is provided via Logical Domain Channels (LDC). The LDC functionality comprises support in both the Hypervisor and Solaris, and is used to send small messages or share regions of memory between the logical domains.

Please refer to the LDoms Administration and Beginners guides for more information and examples on using virtual IO. In future blogs I will discuss common virtual IO tips, tricks and gotchas.

Friday Apr 27, 2007

LDoms 1.0 release !!!

Logical Domains (LDoms) 1.0 has now been released. You can get the firmware upgrade, and updates to the Solaris 10 software, including the Logical Domain Manager, to enable domaining on your UltraSPARC-T1 processor based system. Please visit the product download page for more information.

Thursday Mar 15, 2007

An Introduction

So here it is, I am a blogger too. So why blog ...

The term was coined in 1999, and today Webster’s dictionary defines a blog as a “diary; a personal chronological log of thoughts published on a Web page.” More importantly, it says that blogs are “typically updated daily” and that “blogs often reflect the personality of the author.”

I have been planning to start blogging to document my adventures in the world of computing, both work and personal: a forum where I will share my technical and not-so-technical endeavours. And of course, the primary goal remains to share information about my experiences with Sun products, some common gotchas, and what I do at Sun.

So what do I do: I currently work on the Logical Domains (LDoms) project that brings virtualization support to Sun's CoolThreads servers. Traditionally on large computer systems, users dimension their systems to handle different load conditions by partitioning their hardware into smaller machines. Even though hardware partitions provide good isolation, they can at times result in underutilization of partition resources. Logical Domains allows users to dimension resources in a system by abstracting the underlying compute and IO resources. The Logical Domaining technology is built upon a platform virtualization software layer referred to as the 'Hypervisor'. This layer abstracts the hardware details of a machine from its operating systems. I currently lead a small team of developers working on the virtualization of IO resources like storage and networking in an LDoms environment. This team is part of a larger team responsible for implementing all the components of the entire LDoms software stack.

Although LDoms has not been officially released yet, code for LDoms has been available in both the OpenSolaris and Solaris S10U3 releases since last year. The initial LDoms release is targeted at Sun's T1000 and T2000 platforms, and an early-access version is available for download. Preliminary information about LDoms can also be obtained by reading the following LDoms Blueprint document.

I hope this first introductory blog gives everyone a preview of what I do at Sun and a quick overview of what is LDoms. In subsequent posts I will talk more about LDoms, IO virtualization and other fun stuff.

About

Narayan Venkat
