Thursday Oct 29, 2009

Enabling xVM on OpenSolaris

Another significant usability improvement that landed in build 126 is Gary and Bill's work on enabling Xen. Now, running xVM should be as simple as:

# pkg install xvm-gui
# echo 'set zfs:zfs_arc_max = 0x10000000' >>/etc/system # yes, you still need this, sadly
# svcadm enable -r milestone/xvm
# reboot

There's also a new Visual Panel for doing this if you prefer a graphical method. More in the flag day message.
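
If you want to make sure the machine really did come up under the hypervisor after the reboot, a couple of quick sanity checks (nothing clever here, just the obvious ones):

# uname -i     # should report i86xpv rather than i86pc when running on the hypervisor
# virsh list   # Domain-0 should be listed as running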


Dry-run migration

As part of our ongoing work on improving the ease of use of xVM, the newly available build 126 of OpenSolaris has my putback for:

6878952 Would like dry-run migration

This feature is useful for doing a simple check as to whether a guest can successfully migrate to another dom0 host. For example, domu-221 here is using a disk path that doesn't exist on the remote host hiss:

# virsh migrate --dryrun domu-221 xen:/// hiss    
error: POST operation failed: xend_post: error from xen daemon:
(xend.err 'Remote server error: Access to vbd:768 failed: error: "/iscsi/nevada-hvm" is not a valid block device.')

This works both with running and shutdown guests. Currently, the checks are fairly limited: are disks of the same path available on the remote host (note there is no checking of GUIDs or whatever to verify they really are the same piece of shared storage); is there enough memory on the remote host; and is the remote host the same CPU vendor. We expect these checks to improve both in scope and in reliability in the future.
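
Since a dry run is only a check, one handy way to use it from a script is to test the exit status; I'm assuming here that a clean dry run exits 0, which is virsh's usual behaviour:

# virsh migrate --dryrun domu-221 xen:/// hiss && echo "dry run passed, OK to migrate"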


Thursday Oct 15, 2009

xVM and COMSTAR iSCSI

I recently had cause to try out COMSTAR for the first time, and I thought I'd write up the steps needed. Unfortunately, it's considerably more complex than the fall-over-easy shareiscsi=on ZFS feature.

Configuring the COMSTAR server

First install the storage-server packages and enable the services:

# svcadm enable -r stmf
# svcadm enable -r iscsi/target

We want to create a target group for each of our xVM guests, each of which will have one LUN in it. After creating the LUN, we define a "view" that allows that LUN to be visible for that target group:

# stmfadm create-tg domu-226
# zfs create -V 15G export/domu-226
# stmfadm create-lu /dev/zvol/rdsk/export/domu-226
Logical unit created: 600144F0C73ABF0F00004AD75DF2001A
# stmfadm add-view -t domu-226 600144F0C73ABF0F00004AD75DF2001A

Now we need to create the iSCSI target for this target group, which has our single LUN in it.

# itadm create-target -l domu-226
Target iqn.1986-03.com.sun:02:b8596bb9-9bb9-40e9-8cda-add6073ece46 successfully created

Here (finally) is our iSCSI Alias we can use in the clients. But we're not done yet. By default, this target will be able to see all LUNs not in a target group. So we need to make it a member of our domu-226 target group:

# stmfadm add-tg-member -g domu-226 iqn.1986-03.com.sun:02:b8596bb9-9bb9-40e9-8cda-add6073ece46
# stmfadm list-tg -v
Target Group: domu-226
        Member: iqn.1986-03.com.sun:02:b8596bb9-9bb9-40e9-8cda-add6073ece46
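
It's worth a quick sanity check at this point that the LUN and its view look the way we expect; list-view wants the LUN's GUID, i.e. the one create-lu printed above:

# stmfadm list-lu -v
# stmfadm list-view -l 600144F0C73ABF0F00004AD75DF2001A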

Configuring the iSCSI initiator (client)

We do this in the usual manner:

# svcadm enable -r svc:/network/iscsi/initiator:default
# iscsiadm add discovery-address 10.6.70.43:3260
# iscsiadm modify discovery --sendtargets enable
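
Before installing anything, it's also worth checking that the initiator can see the target, and that our Alias came across; it should show up in the verbose listing (the exact output format may differ on your setup):

# iscsiadm list target -v | grep -i alias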

Installing a guest onto the LUN

We went through the above gymnastics so we could have a human-readable Alias for each domU's root LUN. So now we can do:

# virt-install --paravirt --name domu-226 --ram 1024 --os-type solaris --os-variant opensolaris \
  --location nfs:10.5.235.28:/export/nv/x/latest --network bridge,mac=00:14:4f:0f:b5:3e \
  --disk path=/alias/domu-226,driver=phy,subdriver=iscsi \
  --nographics


Monday Jan 26, 2009

OpenSolaris 2008.11 as a dom0

UPDATE: the canonical location for this information is now here - please check there, as it will be updated as necessary, unlike this blog entry.

As a final part to my entries on OpenSolaris and Xen, let's go through the steps needed to turn OpenSolaris into a dom0. Thanks to Trevor O for documenting this for 2008.05. And as before, expect this process to get much, much easier soon!

I'm going to do the work in a separate BE, so if we mess up, we shouldn't have broken anything. So, first we create our BE:

$ pfexec beadm create -a -d xvm xvm

Next, let's install the packages. If you've updated to the development version, a simple pkg install xvm-gui will work, but let's assume you haven't:

$ pfexec beadm mount xvm /tmp/xvm-be
$ pfexec pkg -R /tmp/xvm-be install SUNWvirt-manager SUNWxvm SUNWvdisk SUNWvncviewer
$ pfexec beadm umount xvm

Now we need to actually reboot into Xen. Unfortunately beadm is not yet aware of how to do this, so we'll have to hack it up. We're going to run some awk over the menu.lst file which controls grub:

$ awk '
/^title/ { xvm=0; }
/^title.xvm$/ { xvm=1; }
/^(splashimage|foreground|background)/ {
    if (xvm == 1) next
}
/^kernel\$/ {
    if (xvm == 1) {
       print("kernel\$ /boot/\$ISADIR/xen.gz")
       sub("^kernel\\$", "module$")
       gsub("console=graphics", "console=text")
       gsub("i86pc", "i86xpv")
       $2=$2 " " $2
    }
}
{ print }' /rpool/boot/grub/menu.lst >/var/tmp/menu.lst.xvm

Let's check that the awk script (my apologies) worked properly:

$ tail /var/tmp/menu.lst.xvm 
...
#============ End of LIBBE entry =============
title xvm
findroot (pool_rpool,0,a)
bootfs rpool/ROOT/xvm
kernel$ /boot/$ISADIR/xen.gz
module$ /platform/i86xpv/kernel/$ISADIR/unix /platform/i86xpv/kernel/$ISADIR/unix -B $ZFS-BOOTFS,console=text
module$ /platform/i86pc/$ISADIR/boot_archive
#============ End of LIBBE entry =============

Looks good. We'll move it into place, and reboot:

$ pfexec cp /rpool/boot/grub/menu.lst /rpool/boot/grub/menu.lst.saved
$ pfexec mv /var/tmp/menu.lst.xvm /rpool/boot/grub/menu.lst
$ pfexec reboot

This should boot you into xVM. If everything worked OK, let's enable the services:

$ pfexec svcadm enable -r xvm/virtd ; pfexec svcadm enable -r xvm/domains

At this point, you should be able to merrily go ahead and install domains!
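
As a quick smoke test before trying an install, check that the control domain shows up (the exact column layout may differ slightly):

$ pfexec virsh list
 Id Name                 State
----------------------------------
  0 Domain-0             running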

Update: Todd Clayton pointed out the issue I've filed here: SUNWxvm needs to depend on SUNWvdisk. I've updated the instructions above with the workaround.

Update update: Rich Burridge has fixed it. Nice!


Thursday Dec 11, 2008

OpenSolaris 2008.11 guest domain on a Linux dom0

My previous blog post described how to install OpenSolaris 2008.11 on a Solaris dom0 under Xen. This also works with a Linux dom0. However, since upstream is missing some of our dom0 fixes, it's unfortunately more complicated. In particular, we can't use virt-install, as it doesn't know about Solaris ISOs, and later on, we can't use pygrub to boot from ZFS, since it doesn't know how to read such a filesystem. Bear with me, this gets a little awkward.

This example is using a 32-bit Fedora 8 installation. Your mileage is likely to vary if you're using a different version, or another Linux distribution. First, some of the configuration parameters you might want to change:

export name="domu-224"
export iso="/isos/osol-2008.11.iso"
export dompath="/export/guests/2008.11"
export rootdisk="$dompath/root.img"
export unixfile="/platform/i86xpv/kernel/unix"

If you're on 64-bit Linux, set unixfile="/platform/i86xpv/kernel/amd64/unix" instead. We need to create ourselves a 10GB root disk:

mkdir -p $dompath
dd if=/dev/zero count=1 bs=$((1024 * 1024)) seek=10230 of=$rootdisk
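
Thanks to the seek, this is a sparse file, so it won't actually consume 10GB up front; you can see this with something like:

ls -lhs $rootdisk    # apparent size is ~10GB, but only about 1MB of blocks is allocated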

Now let's use the configuration we need to install OpenSolaris:

cat >/tmp/domain-$name.xml <<EOF
<domain type='xen'>
 <name>$name</name>
 <bootloader>/usr/bin/pygrub</bootloader>
 <bootloader_args>--kernel=/platform/i86xpv/kernel/unix --ramdisk=/boot/x86.microroot</bootloader_args>
 <memory>1048576</memory>
 <on_reboot>destroy</on_reboot>
 <devices>
  <interface type='bridge'>
   <source bridge='eth0' />
   <!--
       If you have a static DHCP setup, add the domain's MAC address here
       <mac address='00:16:3e:1b:e8:18' />
   -->
  </interface>
  <disk type='file' device='cdrom'>
   <driver name='file' />
   <source file='$iso' />
   <target dev='xvdc:cdrom' />
  </disk>
  <disk type='file' device='disk'>
   <driver name='file' />
   <source file='$rootdisk' />
   <target dev='xvda' />
  </disk>
 </devices>
</domain>
EOF

And start up the domain:

virsh create /tmp/domain-$name.xml
virsh console $name

Now you're dropped into the domain's console, and you can use the VNC trick I described to do the install. Answer the questions, wait for the domain to DHCP, then:

domid=`virsh domid $name`
ip=`/usr/bin/xenstore-read /local/domain/$domid/ipaddr/0`
port=`/usr/bin/xenstore-read /local/domain/$domid/guest/vnc/port`
/usr/bin/xenstore-read /local/domain/$domid/guest/vnc/passwd
vncviewer $ip:$port

At this point, you can proceed with the installation as normal. Before you reboot though, we need to do some tricks, due to the lack of ZFS support mentioned above. Whilst still in the live CD environment, bring up a terminal. We need to copy the new kernel and ramdisk to the Linux dom0. We can automate this via a handy script:

#!/bin/bash

dom0=$1
dompath=$2
unixfile=/platform/i86xpv/kernel/$3/unix

root=`pfexec beadm list -H | grep ';N*R;' | cut -d \; -f 1`
mkdir /tmp/root
pfexec beadm mount $root /tmp/root 2>/dev/null
mount=`pfexec beadm list -H $root | cut -d \; -f 4`
pfexec bootadm update-archive -R $mount
scp $mount/$unixfile root@$dom0:$dompath/kernel.$root
scp $mount/platform/i86pc/$3/boot_archive root@$dom0:$dompath/ramdisk.$root
pfexec beadm umount $root 2>/dev/null
echo "Kernel and ramdisk for $root copied to $dom0:$dompath"
echo "Kernel cmdline should be:"
echo "$unixfile -B zfs-bootfs=rpool/ROOT/$root,bootpath=/xpvd/xdf@51712:a"

For example, we might do:

/tmp/update_dom0 linux-dom0 /export/guests/2008.11

or on 64-bit:

/tmp/update_dom0 linux-dom0 /export/guests/2008.11 amd64

Now, you can finish the installation by clicking the reboot button. This will shut down the domain, ready to run. But first we need the configuration file for running the domain:

cat >/$dompath/$name.xml <<EOF
<domain type='xen'>
 <name>$name</name>
 <os>
  <kernel>$dompath/kernel.opensolaris</kernel>
  <initrd>$dompath/ramdisk.opensolaris</initrd>
  <cmdline>$unixfile -B zfs-bootfs=rpool/ROOT/opensolaris,bootpath=/xpvd/xdf@51712:a</cmdline>
 </os>
 <memory>1048576</memory>
 <devices>
  <interface type='bridge'>
   <source bridge='eth0'/>
  </interface>
  <disk type='file' device='disk'>
   <driver name='file' />
   <source file='$rootdisk' />
   <target dev='xvda' />
  </disk>
 </devices>
</domain>
EOF

virsh define $dompath/$name.xml
virsh start $name
virsh console $name

It should be booting, and you're (finally) done!

Updating the guest

Unfortunately we're not quite out of the woods yet. What we have works fine, but if we update the guest via pkg image-update, we'll need to make changes in dom0 to boot the new boot environment. The update_dom0 script above will do a fine job of copying out the new kernel and ramdisk for the BE that's active on reboot, but you also need to edit the config file. For example, if I wanted to boot into the new BE called opensolaris-1, I'd replace these lines:

<kernel>$dompath/kernel.opensolaris</kernel>
<initrd>$dompath/ramdisk.opensolaris</initrd>
<cmdline>$unixfile -B zfs-bootfs=rpool/ROOT/opensolaris,bootpath=/xpvd/xdf@51712:a</cmdline>

with these:

<kernel>$dompath/kernel.opensolaris-1</kernel>
<initrd>$dompath/ramdisk.opensolaris-1</initrd>
<cmdline>$unixfile -B zfs-bootfs=rpool/ROOT/opensolaris-1,bootpath=/xpvd/xdf@51712:a</cmdline>

then re-configure the domain (whilst it's shut down) via virsh undefine $name ; virsh define $dompath/$name.xml.
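
If you end up doing this a lot, the dom0 side is easily scripted. This is only a sketch: it assumes the old BE name appears in the config file solely in the kernel, initrd and cmdline lines (true for the file we generated above), and it uses the paths and domain name from my example:

# on the Linux dom0, after running update_dom0 from inside the guest
old=opensolaris
new=opensolaris-1
sed -i "s/$old/$new/g" /export/guests/2008.11/domu-224.xml
virsh undefine domu-224
virsh define /export/guests/2008.11/domu-224.xml
virsh start domu-224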

Yes, we're aware this is rather over-complicated. We're trying to find the time to send our changes to virt-install upstream, as well as ZFS support for pygrub. Eventually this will make it much easier to use a Linux dom0.


Wednesday Dec 10, 2008

OpenSolaris 2008.11 as a para-virtual Xen guest

UPDATE: the canonical location for this information is now here - please check there, as it will be updated as necessary, unlike this blog entry.

As well as obviously working with VirtualBox, OpenSolaris can also run as a guest domain under Xen. The installation CD ships with the paravirtual extensions, so you can run it as a fully para-virtualized guest. This provides a significant advantage over fully-virtualized guests, or even guests with para-virtual drivers like Solaris 10 Update 6. Of course, if you choose to, you can still run OpenSolaris fully-virtualized (a.k.a. HVM mode), but there's little advantage to doing so.

One slight wrinkle is that Solaris guests don't yet implement the virtual framebuffer that the Xen infrastructure supports. Since OpenSolaris doesn't yet have a text-mode install, this means that to install such a PV guest, we need a way to bring up a graphical console.

With 2008.11, this is considerably easier. Presuming we're running a Solaris dom0 (either Nevada or OpenSolaris, of course), let's start an install of 2008.11:

# zfs create rpool/zvol
# zfs create -V 10G rpool/zvol/domu-220-root
# virt-install --nographics --paravirt --ram 1024 --name domu-220 -f /dev/zvol/dsk/rpool/zvol/domu-220-root -l /isos/osol-2008.11.iso

This will drop you into the console for the guest to ask you the two initial questions. Since they're not really important in this circumstance, you can just choose the defaults. This example presumes that you have a DHCP server set up to give out dynamic addresses. If you only hand out addresses statically based on MAC address, you can also specify the --mac option. As OpenSolaris more-or-less assumes DHCP, it's recommended to set one up.
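
For example, with a static DHCP setup you'd add the MAC like this (the address below is purely illustrative - use whichever one your DHCP server actually knows about):

# virt-install --nographics --paravirt --ram 1024 --name domu-220 --mac 00:14:4f:0f:b5:3e \
  -f /dev/zvol/dsk/rpool/zvol/domu-220-root -l /isos/osol-2008.11.iso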

Now we need a graphical console in order to interact with the OpenSolaris installer. If the guest domain successfully finished booting the live CD, a VNC server should be running. It has recorded the details of this server in XenStore. This is essentially a name/value config database used for communicating between guest domains and the control domain (dom0). We can start a VNC session as follows:

# domid=`virsh domid domu-220`
# ip=`/usr/lib/xen/bin/xenstore-read /local/domain/$domid/ipaddr/0`
# port=`/usr/lib/xen/bin/xenstore-read /local/domain/$domid/guest/vnc/port`
# /usr/lib/xen/bin/xenstore-read /local/domain/$domid/guest/vnc/passwd
DJP9tYDZ
# vncviewer $ip:$port

At the VNC password prompt, enter the given password, and this should bring up a VNC session, and you can merrily install away.

Implementation

The live CD runs a transient SMF service system/xvm/vnc-config. If it finds itself running on a live CD, it will generate a random VNC password, configure application/x11/x11-server to start Xvnc, and write the values above to XenStore. When application/graphical-login/gdm starts, it will read these service properties and start up the VNC server. The service system/xvm/ipagent tracks the IPv4 address given to the first running interface and writes it to XenStore.

By default, the VNC server is configured not to run post-installation due to security concerns. This can be changed though, as follows:

# svccfg -s x11-server
setprop options/xvm_vnc = "true"

Please remember that VNC is not secure. Since you need elevated privileges to read the VNC password from XenStore, that's sufficiently protected, as long as you always run the VNC viewer locally on the dom0, or via SSH tunnelling or some other secure method.
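
For example, rather than sitting on the dom0, you can tunnel the guest's VNC port through it over SSH; this sketch assumes the port read from XenStore was the usual 5900, with $ip being the guest address from XenStore and "you@dom0" standing in for your account on the dom0:

$ ssh -L 5900:$ip:5900 you@dom0
$ vncviewer localhost:5900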

Note that this works even with a Linux dom0, although you can't yet use virt-install, as the upstream version doesn't yet "know about" OpenSolaris (more on this later).


Tuesday Jul 29, 2008

Direct mounting of files

As part of my work on Least Privilege for xVM, I worked on implementing direct file mounts. The idea is that we'd modify the Solaris support in virt-install to use these direct mounts, instead of the more laborious older method required.

A long-standing peeve of Solaris users is that in order to mount a file system image (in particular a DVD ISO image), it's a two-step process. This was less than ideal, as many other UNIX OS's made it simple to do: you'd just pass the file to the mount command, along with a special option or two, and it mounts it directly.

With my putback of 6384817 Need persistent lofi based mounts and direct mount(1m) support for lofi, this is now possible (in fact, a little easier) in Solaris. Instead of doing this:

# device=`lofiadm -a /export/solarisdvd.iso`
# mount -F hsfs $device /mnt/iso
...
# umount /mnt/iso
# lofiadm -d /export/solarisdvd.iso

it's just:

# mount -F hsfs /export/solarisdvd.iso /mnt/iso
...
# umount /export/solarisdvd.iso

Under the hood, this still uses the lofi driver; it's just used automatically at mount and unmount time. There's no need for an -o loop option as on Linux.

This is supported for most of the file systems you might need in Solaris, namely ufs, hsfs, udfs, and pcfs. This doesn't work for ZFS, as this has its own method for mounting file system images.

I was asked a couple of times why I implemented this in the kernel at all (which meant requiring file system support via vfs_get_lofi()). This was primarily to allow non-root users to access file mounts; in fact this was the primary motivation for implementing this feature from the point of view of the xVM work. In particular, if you have PRIV_SYS_MOUNT, you can do direct file mounts as well as normal mounts. This is important for virt-install, which we want to avoid running as root, but which needs to be able to mount DVDs to grab the booting information when installing a guest.
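
As a rough illustration of the non-root case (the user name here is made up, handing out sys_mount wholesale is a much blunter instrument than what virt-install actually does, and you'd need to log in again before defaultpriv takes effect):

# usermod -K defaultpriv=basic,sys_mount builder
$ mount -F hsfs /export/solarisdvd.iso /mnt/iso     # now works as the unprivileged user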

As always, there's more work that could be done. mount is not smart about relative paths, and should notice (and correct) early if you try to pass a relative path as the first argument. Solaris has always (rather annoyingly) required an -F option to identify what kind of file system you're mounting, which is particularly pedantic of it. Equally, the lofi driver doesn't comprehend fdisk or VTOC layouts.


Friday Apr 18, 2008

xVM Under The Hood: seg_mf

An occasional series wherein I'll describe a part of the xVM implementation. Today, I'll be talking about seg_mf. You may want to read through my explanation of live migration and MMU virtualization first.

The control domain (dom0) often needs access to memory pages that belong to a running guest domain. The most obvious example of this is in constructing the domain during boot, but it's also needed for mapping the shared virtual guest console page, generating guest domain core dumps, etc.

This is implemented via the privcmd driver. Each process that needs to map some area of a guest domain's memory maps a range of anonymous virtual memory. The process then sends a request to the driver to map in a given range or set of machine frames into the given virtual address range. The two requests (IOCTL_PRIVCMD_MMAP and IOCTL_PRIVCMD_MMAP_BATCH) are more or less the same, although the latter allows the user to track MFNs that couldn't be mapped (see below).

Both ioctl()s hook into the seg_mf code. This is a normal Solaris segment driver (see Solaris Internals) with a hook that's used to store the arrays of MFN values that each VA range is to be backed by. This segment driver is a little unusual though: it does not support demand faulting. That is, every page in the segment is faulted in (and locked in) at the time of the ioctl(). This is needed to support the error-reporting interface described below, but it also helps simplify the driver significantly.

To fault the range, we go through each page-size chunk in the mapping. We need to establish a mapping from the virtual address of the chunk to the actual machine frame holding the page owned by the guest domain. This happens in segmf_faultpage(). The HAT isn't used to our strange request, so we load a temporary mapping at the given VA, and replace that with a mapping to the real underlying MFN via HYPERVISOR_update_va_mapping_otherdomain().

Normally, the MFNs given via the ioctl() should be mappable. One exception is HVM live migration. This was implemented, somewhat confusingly, to use the same interfaces but pass GMFNs not MFNs. In particular, for HVM guests, a guest MFN (what a guest thinks is a real machine frame number) is actually a pseudo-physical frame number. As a result, due to ballooning, or PV drivers, etc., this GMFN may not have a real MFN backing it, so the attempt to map it will fail. We mark the MFN as failed in the outgoing array of IOCTL_PRIVCMD_MMAP_BATCH and let the client deal with it. This is generally OK, since the iterative nature of live migration means we can still get to all the pages we need.

One nice enhancement would be to extend pmap to recognise such mappings. In particular qemu-dm has a bunch of such mappings. It'd be relatively easy to mark such mappings as coming from seg_mf. Extra marks for listing the MFN ranges too, though that's a little harder :)


Friday Feb 01, 2008

DTrace on xenstored

DTrace support for xenstored has just been merged in the upstream community version of Xen. Why is it useful?

The daemon xenstored runs in dom0 userspace, and implements a simple 'store' of configuration information. This store is used for storing parameters used by running guest domains, and interacts with dom0, guest domains, qemu, xend, and others. These interactions can easily get pretty complicated as a result, and visualizing how requests and responses are connected can be non-obvious.

The existing community solution was a 'trace' option to xenstored: you could restart the daemon and it would record every operation performed. This worked reasonably well, but was very awkward: restarting xenstored means a reboot of dom0 at this point in time. By the time you've set up tracing, you might not be able to reproduce whatever you're looking at any more. Besides, it's extremely inconvenient.

It was obvious that we needed to make this dynamic, and DTrace USDT (Userspace Statically Defined Tracing) was the obvious choice. The patch adds a couple of simple probes for tracking requests and responses; as usual, they're activated dynamically, so have (next to) zero impact when they're not used. On top of these probes I wrote a simple script called xenstore-snoop. Here's a couple of extracts of the output I get when I start a guest domain:

# /usr/lib/xen/bin/xenstore-snoop 
DOM  PID      TX     OP
0    100313   0      XS_GET_DOMAIN_PATH: 6 -> /local/domain/6
0    100313   0      XS_TRANSACTION_START:  -> 930
0    100313   930    XS_RM: /local/domain/6 -> OK
0    100313   930    XS_MKDIR: /local/domain/6 -> OK
...
6    0        0      XS_READ: /local/domain/0/backend/vbd/6/0/state -> 4
6    0        0      XS_READ: device/vbd/0/state -> 3
0    0        -      XS_WATCH_EVENT: /local/domain/6/device/vbd/0/state FFFFFF0177B8F048
6    0        -      XS_WATCH_EVENT: device/vbd/0/state FFFFFF00C8A3A550
6    0        0      XS_WRITE: device/vbd/0/state 4 -> OK
0    0        0      XS_READ: /local/domain/6/device/vbd/0/state -> 4
6    0        0      XS_READ: /local/domain/0/backend/vbd/6/0/feature-barrier -> 1
6    0        0      XS_READ: /local/domain/0/backend/vbd/6/0/sectors -> 16777216
6    0        0      XS_READ: /local/domain/0/backend/vbd/6/0/info -> 0
6    0        0      XS_READ: device/vbd/0/device-type -> disk
6    0        0      XS_WATCH: cpu FFFFFFFFFBC2BE80 -> OK
6    0        -      XS_WATCH_EVENT: cpu FFFFFFFFFBC2BE80
6    0        0      XS_READ: device/vif/0/state -> 1
6    0        0      [ERROR] XS_READ: device/vif/0/type -> ENOENT
...

This makes the interactions immediately obvious. We can observe the Xen domain that's doing the request, the PID of the process (this only applies to dom0 control tools), the transaction ID, and the actual operations performed. This has already proven of use in several investigations.

Of course this being DTrace, this is only part of the story. We can use these probes to correlate system behaviour: for example, xenstored transactions are currently rather heavyweight, as they involve copying a large file; these probes can help demonstrate this. Using Python's DTrace support, we can look at which stack traces in xend correspond to which requests to the store; and so on.
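
If you'd rather poke at the raw probes than use xenstore-snoop, something along these lines should work; I'm wildcarding the provider name rather than spelling out exact probe names, so check what dtrace -l reports on your box:

# dtrace -l -n 'xenstore*:::'                            # list the probes xenstored exposes
# dtrace -n 'xenstore*::: { @[probename] = count(); }'   # count store operations by probe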

This feature, whilst relatively minor, is part of an ongoing plan to improve the observability and RAS of Xen and the solutions Sun are building on top of it. It's very important to us to bring Solaris's excellent observability features to the virtualization space: you've seen the work with zones in this area, and you can expect a lot more improvements for the Xen case too.

IRC

I meant to say: after my previous post, I resurrected #opensolaris-dev: if you'd like to talk about OpenSolaris development in a non-hostile environment, please join!


Thursday Dec 06, 2007

Xen compatibility with Solaris

Maintaining the compatibility of hardware virtualization solutions can be tricky. Below I'll talk about two bugs that needed fixes in the Xen hypervisor. Both of them have unfortunate implications for compatibility, but thankfully, the scope was limited.

6616864 amd64 syscall handler needs fixing for xen 3.1.1

Shortly after the release of 3.1.1, we discovered that all 64-bit processes in a Solaris domain would segfault immediately. After much debugging and head-scratching, I eventually found the problem. On AMD64, 64-bit processes trap into the kernel via the syscall instruction. Under Xen, this will obviously trap to the hypervisor. Xen then 'bounces' this back to the relevant OS kernel.

On real hardware, %rcx and %r11 have specific meanings. Prior to 3.1.1, Xen happened to maintain these values correctly, although the layout of the stack is very different from real hardware. This was broken in the 3.1.1 release: as a result, the %rflags of each process was corrupted, and the process segfaulted almost immediately. We fixed the bug in Solaris, so we would still work with 3.1.1. This was also fixed (restoring the original semantics) in Xen itself in time for the 3.1.2 release. So there's a small window (early Solaris xVM releases and community versions of Xen 3.1.1) where we're broken, but thankfully, we caught this pretty early. The lesson to be drawn? Clear documentation of the hypervisor ABI would have helped, I think.

6618391 64-bit xVM lets processes fiddle with kernelspace, but Xen bug saves us

Around the same time, I noticed during code inspection that we were still setting PT_USER in PTE entries on 64-bit. This had some nasty implications, but first, some background.

On 32-bit x86, Xen protects itself via segmentation: it carves out the top 64Mb, and refuses to let any of the domains load a segment selector that allows read or write access to that part of the address space. Each domain kernel runs in ring 1 so can't get around this. On 64-bit, this hack doesn't work, as AMD64 does not provide full support for segmentation (given what a legacy technique it is). Instead, and somewhat unfortunately, we have to use page-based permissions via the VM system. Since page table entries only have a single bit ("user/supervisor") instead of being able to say "ring 1 can read, but ring 3 cannot", the OS kernel is forced into ring 3. Normally, ring 3 is used for userspace code. So every time we switch between the OS kernel and userspace, we have to switch page tables entirely - otherwise, the process could use the kernel page tables to write to kernel address-space.

Unfortunately, this means that we have to flush the TLB every time, which has a nasty performance cost. To help mitigate this problem, in Xen 3.0.3, an incompatible change was made. Previously, so that the kernel (running in ring 3, remember) could access its address space, it had to set PT_USER in its kernel page table entries (PTEs). With 3.0.3, this was changed: now, the hypervisor would automatically do that. Furthermore, if Xen did see a PTE with PT_USER set, then it assumed this was a userspace mapping. Thus, it also set PT_GLOBAL, a hardware feature - if such a bit is set, then a corresponding TLB entry is not flushed. This meant that switching between userspace and the OS kernel was much faster, as the TLB entries for userspace were no longer flushed.

Unfortunately, in our kernel, we missed this change in some crucial places, and until we fixed the bug above, we were setting PT_USER even on kernel mappings. This was fairly obviously A Bad Thing: if you caught things just right, a kernel mapping would still be present in the TLB when a user-space program was running, allowing userspace to read from the kernel! And indeed, some simple testing showed this:

dtrace -qn 'fbt:genunix::entry /arg0 > `kernelbase/ { printf("%p ", arg0); }' | \
    xargs -n 1 ~johnlev/bin/i386/readkern | while read ln; do echo $ln::whatis | mdb -k ; done

With the above use of DTrace, MDB, and a little program that attempts to read addresses, we can see output such as:

ffffff01d6f09c00 is ffffff01d6f09c00+0, allocated as a thread structure
ffffff01c8c98438 is ffffff01c8c983e8+50, bufctl ffffff01c8ebf8d0 allocated from as_cache
ffffff01d6f09c00 is ffffff01d6f09c00+0, allocated as a thread structure
ffffff01d44d7e80 is ffffff01d44d7e80+0, bufctl ffffff01d3a2b388 allocated from kmem_alloc_40
ffffff01d44d7e80 is ffffff01d44d7e80+0, bufctl ffffff01d3a2b388 allocated from kmem_alloc_40

Thankfully, the fix was simple: just stop adding PT_USER to our kernel PTE entries. Or so I thought. When I did that, I noticed during testing that the userspace mappings weren't getting PT_GLOBAL set after all (big thanks to MDB's ::vatopfn, which made this easy to see).

Yet more investigation revealed the problem to be in the hypervisor. Unlike certain other popular OSes used with Xen, we set PTE entries in page tables using atomic compare and swap operations. Remember that under Xen, page tables are read-only to ensure safety. When an OS kernel tries to write a PTE, a page fault happens in Xen. Xen recognises the write as an attempt to update a PTE and emulates it. However, since it hadn't been tested, this emulation path was broken: it wasn't doing the correct mangling of the PTE entry to set PT_GLOBAL. Once again, the actual fix was simple.

By the way, that same putback also had the implementation of:

6612324 ::threadlist could identify taskq threads

I'd been doing an awful lot of paging through ::threadlist output recently, and always having to jump through all the (usually irrelevant) taskq threads was driving me insane. So now you can just specify ::threadlist -t and get a much, much, shorter list.


Tuesday Oct 23, 2007

OpenSolaris xVM now available in SX:CE

Build 75 of Solaris Express Community Edition is now out, and it includes our bits. So go ahead, install build 75, select the xVM entry in grub and play around! We're still working on updating the documentation on our community page; in the meantime, you have manpages - start at xVM(5) (and note that the forthcoming build 76 has much improved versions of those docs).

You might be wondering if your machine is capable of running Windows or other operating systems under HVM. Joe Bonasera has a simple program you can run that will tell you. Alternatively, if you're already running with our bits, running 'virt-install' will tell you - if it asks you about creating a fully-virtualized domain, then it should work, and you can end up with a desktop like Russell Blaine's.

Nils, meanwhile, describes how we've improved the RAS of the hypervisor by integrating it with Solaris crash dumps here. This feature has saved our lives numerous times during development as those of us who've done the "hex dump" debugging thing know very well.

Of course, we're not done yet - we have bugs to fix and rough edges to smooth out, and we have significant features to implement. One of the major items we're working on in the near future is the upgrade to Xen 3.1.1 (or possibly 3.1.2, depending on timelines!). This will give us the ability to do live migration of HVM domains, along with a host of other features and improvements.

