Thursday Dec 06, 2007

Xen compatibility with Solaris

Maintaining the compatibility of hardware virtualization solutions can be tricky. Below I'll talk about two bugs that needed fixes in the Xen hypervisor. Both of them have unfortunate implications for compatibility, but thankfully, the scope was limited.

6616864 amd64 syscall handler needs fixing for xen 3.1.1

Shortly after the release of 3.1.1, we discovered that all 64-bit processes in a Solaris domain would segfault immediately. After much debugging and head-scratching, I eventually found the problem. On AMD64, 64-bit processes trap into the kernel via the syscall instruction. Under Xen, this will obviously trap to the hypervisor. Xen then 'bounces' this back to the relevant OS kernel.

On real hardware, %rcx and %r11 have specific meanings at this point: syscall stashes the return %rip in %rcx and the saved %rflags in %r11. Prior to 3.1.1, Xen happened to maintain these values correctly, although the layout of the stack is very different from real hardware. This was broken in the 3.1.1 release: as a result, the %rflags of each 64-bit process was corrupted, and the process segfaulted almost immediately. We fixed the bug in Solaris so that we would still work with 3.1.1. The original semantics were also restored in Xen itself in time for the 3.1.2 release. So there's a small window (early Solaris xVM releases and community versions of Xen 3.1.1) where we're broken, but thankfully, we caught this pretty early. The lesson to be drawn? Clear documentation of the hypervisor ABI would have helped, I think.

6618391 64-bit xVM lets processes fiddle with kernelspace, but Xen bug saves us

Around the same time, I noticed during code inspection that we were still setting PT_USER in PTE entries on 64-bit. This had some nasty implications, but first, some background.

On 32-bit x86, Xen protects itself via segmentation: it carves out the top 64MB of the address space, and refuses to let any of the domains load a segment selector that allows read or write access to that region. Each domain kernel runs in ring 1, so it can't get around this. On 64-bit, this hack doesn't work, as AMD64 does not provide full support for segmentation (given what a legacy technique it is). Instead, and somewhat unfortunately, we have to use page-based permissions via the VM system. Since page table entries have only a single "user/supervisor" bit, rather than a way to say "ring 1 can read, but ring 3 cannot", the OS kernel is forced into ring 3. Normally, ring 3 is used for userspace code. So every time we switch between the OS kernel and userspace, we have to switch page tables entirely - otherwise, the process could use the kernel page tables to write to kernel address space.

Unfortunately, this means that we have to flush the TLB every time, which has a nasty performance cost. To help mitigate this problem, an incompatible change was made in Xen 3.0.3. Previously, so that the kernel (running in ring 3, remember) could access its address space, it had to set PT_USER in its kernel page table entries (PTEs). With 3.0.3, this was changed: now, the hypervisor would do that automatically. Furthermore, if Xen saw a PTE with PT_USER set, it assumed this was a userspace mapping, and so it also set PT_GLOBAL, a hardware feature: if that bit is set, the corresponding TLB entry is not flushed across a page-table switch. This meant that switching between userspace and the OS kernel was much faster, as the TLB entries for userspace were no longer flushed.
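
To make the convention concrete, here's a rough sketch in C of the mangling described above (the bit values are the standard x86 PTE bits, but the helper itself is illustrative, not Xen's actual code):

#include <stdint.h>

#define PT_USER    0x004ULL     /* user/supervisor bit in an x86 PTE */
#define PT_GLOBAL  0x100ULL     /* global bit: entry survives a cr3 switch */

/*
 * Illustrative only: a PTE the guest marks PT_USER is assumed to be a
 * userspace mapping and gains PT_GLOBAL; anything else is a kernel mapping
 * and gains PT_USER so the ring-3 kernel can still use it.
 */
static uint64_t
xen_mangle_pte(uint64_t pte)
{
        if (pte & PT_USER)
                return (pte | PT_GLOBAL);       /* userspace mapping */
        return (pte | PT_USER);                 /* kernel mapping */
}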

Unfortunately, in our kernel, we missed this change in some crucial places, and until we fixed the bug above, we were setting PT_USER even on kernel mappings. This was fairly obviously A Bad Thing: if you caught things just right, a kernel mapping would still be present in the TLB when a user-space program was running, allowing userspace to read from the kernel! And indeed, some simple testing showed this:

dtrace -qn 'fbt:genunix::entry /arg0 > `kernelbase/ { printf("%p ", arg0); }' | \
    xargs -n 1 ~johnlev/bin/i386/readkern | while read ln; do echo $ln::whatis | mdb -k ; done

With the above use of DTrace, MDB, and a little program that attempts to read addresses, we can see output such as:

ffffff01d6f09c00 is ffffff01d6f09c00+0, allocated as a thread structure
ffffff01c8c98438 is ffffff01c8c983e8+50, bufctl ffffff01c8ebf8d0 allocated from as_cache
ffffff01d6f09c00 is ffffff01d6f09c00+0, allocated as a thread structure
ffffff01d44d7e80 is ffffff01d44d7e80+0, bufctl ffffff01d3a2b388 allocated from kmem_alloc_40
ffffff01d44d7e80 is ffffff01d44d7e80+0, bufctl ffffff01d3a2b388 allocated from kmem_alloc_40

Thankfully, the fix was simple: just stop adding PT_USER to our kernel PTE entries. Or so I thought. When I did that, I noticed during testing that the userspace mappings weren't getting PT_GLOBAL set after all (big thanks to MDB's ::vatopfn, which made this easy to see).

Yet more investigation revealed the problem to be in the hypervisor. Unlike certain other popular OSes used with Xen, we set PTE entries in page tables using atomic compare and swap operations. Remember that under Xen, page tables are read-only to ensure safety. When an OS kernel tries to write a PTE, a page fault happens in Xen. Xen recognises the write as an attempt to update a PTE and emulates it. However, since it hadn't been tested, this emulation path was broken: it wasn't doing the correct mangling of the PTE entry to set PT_GLOBAL. Once again, the actual fix was simple.

By the way, that same putback also had the implementation of:

6612324 ::threadlist could identify taskq threads

I'd been doing an awful lot of paging through ::threadlist output recently, and always having to jump through all the (usually irrelevant) taskq threads was driving me insane. So now you can just specify ::threadlist -t and get a much, much, shorter list.


Tuesday Oct 23, 2007

OpenSolaris xVM now available in SX:CE

Build 75 of Solaris Express Community Edition is now out, and it includes our bits. So go ahead, install build 75, select the xVM entry in grub and play around! We're still working on updating the documentation on our community page; in the meantime, you have manpages - start at xVM(5) (and note that the forthcoming build 76 has much improved versions of those docs).

You might be wondering if your machine is capable of running Windows or other operating systems under HVM. Joe Bonasera has a simple program you can run that will tell you. Alternatively, if you're already running with our bits, running 'virt-install' will tell you - if it asks you about creating a fully-virtualized domain, then it should work, and you can end up with a desktop like Russell Blaine's.

Nils, meanwhile, describes here how we've improved the RAS of the hypervisor by integrating it with Solaris crash dumps. This feature has saved our lives numerous times during development, as those of us who've done the "hex dump" debugging thing know very well.

Of course, we're not done yet - we have bugs to fix and rough edges to smooth out, and we have significant features to implement. One of the major items we're working on in the near future is the upgrade to Xen 3.1.1 (or possibly 3.1.2, depending on timelines!). This will give us the ability to do live migration of HVM domains, along with a host of other features and improvements.


Tuesday Jul 31, 2007

Automatic start/stop of Xen domains

After answering a query, I said I'd write a blog entry describing what changes we've made to support clean shutdown and start of Xen domains.

Bernd refers to an older method of auto-starting Xen domains used on Linux. In fact, this method has been replaced with the configuration parameters on_xend_start and on_xend_stop. Setting these can ensure that a Xen domain is cleanly shut down when the host (dom0) is shut down, and started automatically as needed. For somewhat obvious reasons, we'd like to have the same semantics as used with zones, if not quite the same implementation (yet, at least).

When I started looking at this, I realised that the community solution had some problems:

Clean shutdown wasn't the default

It seems obvious that by default I'd like my operating systems to shut down cleanly. Only in unusual circumstances would I be happy with an OS being unceremoniously destroyed. We modified our Xen gate to default to on_xend_stop=shutdown.

Suspend on shutdown was dangerous

It is possible to specify on_xend_stop=suspend; this will save the running state to an image file and then destroy the domain (like xm save). However, there is no corresponding on_xend_start setting, nor any logic to ensure that the values match. This is both apparently useless and potentially dangerous, since starting a fresh domain against the stale file-system state left behind by a suspended domain is asking for corruption. We've disabled this functionality.

Actions are tied into xend

This was the biggest problem for us: as modelled, if somebody stops xend, then all the domains would be shut down. Similarly, if xend restarts for whatever reason (say, a hardware error), it would start domains again. We've modified this on Solaris. Instead of xend operating on these values, we introduce a new SMF service, system/xctl/domains, that auto-starts/stops domains as necessary. This service is pretty similar to system/zones. We've set up the dependencies such that a restart of the Xen daemons won't cause any running domains to be restarted. For this to work properly within the SMF framework, we also had to modify xend to wait for all domains to finish their state transitions.

You can find our changes here. And yes, we still need to take system/xctl/domains to PSARC.

Clean shutdown implementation

You might be wondering how the dom0 even asks the guest domains to shut down cleanly. This is done via a xenstore entry, control/shutdown. The control tools write a string into this entry, which is being "watched" by the domain. The kernel then reads the value and responds appropriately (xen_shutdown()), triggering a user-space script via the sysevent framework. If nothing happens for a while, it's possible that the script couldn't run for whatever reason. In that case, we time-out and force a "dirty" shutdown from within the kernel.
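
As a rough outline of that flow (illustrative C only; the helper names and the timeout value are invented stand-ins, not the actual Solaris interfaces):

#include <stddef.h>

#define SHUTDOWN_TIMEOUT_SECS   300     /* assumed timeout, not the real value */

/* Stand-ins for the real xenbus, sysevent and shutdown interfaces. */
extern int  xenbus_read_str(const char *node, char *buf, size_t len);
extern void post_shutdown_sysevent(const char *how);
extern int  wait_for_userspace(int secs);
extern void forced_shutdown(const char *how);

/* Sketch of the watch handler: read the node, hand off to userspace,
 * and fall back to a forced shutdown if nothing happens in time. */
static void
shutdown_watch_fired(void)
{
        char how[32];

        if (xenbus_read_str("control/shutdown", how, sizeof (how)) != 0 ||
            how[0] == '\0')
                return;

        post_shutdown_sysevent(how);            /* user-space script takes over */

        if (!wait_for_userspace(SHUTDOWN_TIMEOUT_SECS))
                forced_shutdown(how);           /* "dirty" fallback in the kernel */
}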


Wednesday Jul 18, 2007

Solaris Xen update

After an undesirably long time, I'm happy to say that another drop of Solaris on Xen is available here. Sources and other sundry parts are here. Documentation can be found at our community site, and you can read Chris Beal describe how to get started with the new bits.

As you might expect, there's been a massive amount of change since the last OpenSolaris release. This time round, we are based on Xen 3.0.4 and build 66 of Nevada. As always, we'd love to hear about your experiences if you try it out, either on the mailing list or the IRC channel.

In many ways, the most significant change is the huge effort we've put in to stabilize our codebase; a significant number of potential hangs, crashes, and core dumps have been resolved, and we hope we're converging on a good-quality release. We've started looking seriously at performance issues, and filling in the implementation gaps. Since the last drop, notable improvements include:

PAE support
By default, we now use PAE mode on 32-bit, aiding compatibility with other domain 0 implementations; we also can boot under either PAE or non-PAE, if the Xen version has 'bi-modal' support. This has probably been the most-requested change missing from our last release.
HVM support
If you have the right CPU, you can now run fully-virtualized domains such as Windows using a Solaris dom0! Whilst more work is needed here, this does seem to work pretty well already. Mark Johnson has some useful tips on using HVM domains.
New management tools
We have integrated the virt- suite of management tools. virt-manager provides a simple GUI for controlling guest domains on a single host. virt-install and virsh are simple CLIs for installing and managing guest domains respectively. Note that parts of these tools are pre-alpha, and we still have a significant amount of work to do on them. Nonetheless, we appreciate any comments...
PV framebuffer
Solaris dom0 now supports the SDL-based paravirt framebuffer backend, which can be used with domUs that have PV framebuffer support.
Virtual NIC support
The Ethernet bridge used in the previous release has been replaced with virtual NICs from the Crossbow project. This enables future work around smart NICs, resource controls, and more.
Simplified Solaris guest domain install
It's now easy to install a new Solaris guest domain using the DVD ISO. The temporary tool in the last release, vbdcfg, has disappeared now as a result. William Kucharski has a walk-through.
Better SMF usage
Several of the xend configuration properties are now controlled using the SMF framework.
Managed domain support
We now support xend-managed domain configurations instead of using .py configuration files. Certain parts of this don't work too well yet (unfortunately all versions of Xen have similar problems), but we are plugging in the gaps here one by one.
Memory ballooning support
Otherwise known as support for dynamic xm mem-set, this allows much greater flexibility in partitioning the physical memory on a host amongst the guest domains. Ryan Scott has more details.
Vastly improved debugging support
Crash dump analysis and debugging tools have always been a critical feature for Solaris developers. With this release, we can use Solaris tools to debug both hypervisor crashes and problems with guest domains. I talk a little bit about the latter feature below.
xvbdb has been renamed
To simply be xdb. This was a very exciting change for certain members of our team.

We're still working hard on finishing things up for our phase 2 putback into Nevada (where "phase 1" was the separate dboot putback). As well as finishing this work, we're starting to look at further enhancements, in particular some features that are available in other vendors' implementations, such as a hypervisor-copy based networking device, blktap support, para-virtualized drivers for HVM domains (a huge performance fix), and more.

Debugging guest domains

Here I'll talk a little about one of the more minor new features that has nonetheless proven very useful. The xm dump-core command generates an image file of a running domain. This file is a dump of all memory owned by the running domain, so it's somewhat similar to the standard Solaris crash dump files. However, dump-core does not require any interaction with the domain itself, so we can grab such dumps even if the domain is unable to create a crash dump via the normal method (typically, it hangs and can't be interacted with), or something else prevents use of the standard Solaris kernel debugging facilities such as kmdb (an in-kernel debugger isn't very useful if the console is broken).

However, this also means that we have no control over the format used by the image file. With Xen 3.0.4, it's rather basic and difficult to work with. This is much improved in Xen 3.1, but I haven't yet written the support for the new format.

To add support for debugging such image files of a Solaris domain, I modified mdb(1) to understand the format of the image file (the alternative, providing a conversion step, seemed unnecessarily awkward, and would have had to throw away information!). As you can see if you look around usr/src/cmd/mdb in the source drop, mdb(1) loads a module called mdb_kb when debugging such image files. This provides simple methods for reading data from the image file. For example, to read a particular virtual address, we need to use the contents of the domain's page tables in the image file to resolve it to a physical page, then look up the location of that page in the file. This differs considerably from how libkvm works with Solaris crash dumps: there, we have a big array of address translations which is used directly, instead of the page table contents.
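
For the curious, that translation amounts to a standard four-level page-table walk performed against data read out of the image file. A simplified sketch (illustrative C only: read_pte() and mfn_to_file_offset() stand in for mdb_kb internals, and large pages are ignored):

#include <stdint.h>

#define PAGE_SHIFT      12
#define PAGE_OFFSET(va) ((va) & ((1ULL << PAGE_SHIFT) - 1))
#define PTE_MFN(pte)    (((pte) & 0x000ffffffffff000ULL) >> PAGE_SHIFT)

/* Stand-ins for mdb_kb's accessors into the dump file. */
typedef uint64_t (*read_pte_f)(uint64_t table_mfn, int index);
typedef int64_t  (*mfn_to_off_f)(uint64_t mfn);

/*
 * Resolve a 64-bit guest virtual address to an offset in the image file by
 * walking the domain's own page tables (PML4 -> PDPT -> PD -> PT).
 */
static int64_t
va_to_file_offset(uint64_t top_mfn, uint64_t va, read_pte_f read_pte,
    mfn_to_off_f mfn_to_file_offset)
{
        uint64_t mfn = top_mfn;
        int64_t off;
        int level;

        for (level = 3; level >= 0; level--) {
                int index = (int)((va >> (PAGE_SHIFT + 9 * level)) & 0x1ff);
                uint64_t pte = read_pte(mfn, index);

                if (!(pte & 1))                 /* not present */
                        return (-1);
                mfn = PTE_MFN(pte);             /* next table, or the data page */
        }

        off = mfn_to_file_offset(mfn);
        return (off < 0 ? off : off + (int64_t)PAGE_OFFSET(va));
}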

In most other respects, debugging a kernel domain image is much the same as a crash dump:

# xm dump-core solaris-domu core.domu
# mdb core.domu
mdb: warning: dump is from SunOS 5.11 onnv-johnlev; dcmds and macros may not match kernel implementation
Loading modules: [ unix genunix specfs dtrace xpv_psm scsi_vhci ufs ... sppp ptm crypto md fcip logindmux nfs ]
> ::status
debugging domain crash dump core.domu (64-bit) from sxc16
operating system: 5.11 onnv-johnlev (i86pc)
> ::cpuinfo
 ID ADDR             FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD           PROC
  0 fffffffffbc4b7f0  1b   40    9 169  yes   yes t-1408926 ffffff00010bfc80 sched
> ::evtchns
Type          Evtchn IRQ IPL CPU ISR(s)
evtchn        1      257 1   0   xenbus_intr
evtchn        2      260 9   0   xenconsintr
virq:debug    3      256 15  0   xen_debug_handler
virq:timer    4      258 14  0   cbe_fire
evtchn        5      259 5   0   xdf_intr
evtchn        6      261 6   0   xnf_intr
evtchn        7      262 6   0   xnf_intr
> ::cpustack -c 0
cbe_fire+0x5c()
av_dispatch_autovect+0x8c(102)
dispatch_hilevel+0x1f(102, 0)
switch_sp_and_call+0x13()
do_interrupt+0x11d(ffffff00010bfaf0, fffffffffbc86f98)
xen_callback_handler+0x42b(ffffff00010bfaf0, fffffffffbc86f98)
xen_callback+0x194()
av_dispatch_softvect+0x79(a)
dispatch_softint+0x38(9, 0)
switch_sp_and_call+0x13()
dosoftint+0x59(ffffff0001593520)
do_interrupt+0x140(ffffff0001593520, fffffffffbc86048)
xen_callback_handler+0x42b(ffffff0001593520, fffffffffbc86048)
xen_callback+0x194()
sti+0x86()
_sys_rtt_ints_disabled+8()
intr_restore+0xf1()
disp_lock_exit+0x78(fffffffffbd1b358)
turnstile_wakeup+0x16e(fffffffec33a64d8, 0, 1, 0)
mutex_vector_exit+0x6a(fffffffec13b7ad0)
xenconswput+0x64(fffffffec42cb658, fffffffecd6935a0)
putnext+0x2f1(fffffffec42cb3b0, fffffffecd6935a0)
ldtermrmsg+0x235(fffffffec42cb2b8, fffffffec3480300)
ldtermrput+0x43c(fffffffec42cb2b8, fffffffec3480300)
putnext+0x2f1(fffffffec42cb560, fffffffec3480300)
xenconsrsrv+0x32(fffffffec42cb560)
runservice+0x59(fffffffec42cb560)
queue_service+0x57(fffffffec42cb560)
stream_service+0xdc(fffffffec42d87b0)
taskq_d_thread+0xc6(fffffffec46ac8d0)
thread_start+8()

Note that both ::cpustack and ::cpuregs are capable of using the actual register set at the time of the dump (since the hypervisor needs to store this for scheduling purposes). You can also see the ::evtchns dcmd in action here; this is invaluable for debugging interrupt problems (and we've fixed a lot of those over the past year or so!).

Currently, mdb_kb only has support for image files of para-virtualized Solaris domains. However, that's not the only interesting target: in particular, we could support mdb in live crash dump mode against a running Solaris domain, which opens up all sorts of interesting debugging possibilities. With a small tweak to Solaris, we can support debugging of fully-virtualized Solaris instances. It's not even impossible to imagine adding Linux kernel support to mdb(1), though it's hard to imagine there would be a large audience for such a feature...


Thursday May 24, 2007

Python and DTrace in build 65

A significant portion of the Xen control tools are written in Python, in particular xend. It's been somewhat awkward to observe what the daemon is doing at times, necessitating endless 'add printf; restart' cycles. A while ago I worked on adding DTrace support to the Python packages we ship in OpenSolaris, and these changes have now made it into the latest build, 65.

As is the case with the other providers people have worked on, such as Ruby and Perl, there are two simple probes for function entry and function exit. arg0 contains the filename, arg1 the function name, and arg2 the line number. So, given this simple script to trace the functions called by a particular function invocation, restricted to a given module name:

#!/usr/sbin/dtrace -ZCs

#pragma D option quiet

python$target:::function-entry
    /copyinstr(arg1) == $2 && strstr(copyinstr(arg0), $1) != NULL/ {
        self->trace = 1;
}

python$target:::function-return
    /copyinstr(arg1) == $2 && strstr(copyinstr(arg0), $1) != NULL/ {
        self->trace = 0;
}

python$target:::function-entry,python$target:::function-return
    /self->trace && strstr(copyinstr(arg0), $3) != NULL/ {
        printf("%s %s (%s:%d)\\n", probename == "function-entry" ? "->" : "<-",
            copyinstr(arg1), copyinstr(arg0), arg2);
}

We can run it as follows and get some useful results:

# ./pytrace.d \"hg.py\" \"clone\" \"mercurial\" -c 'hg clone /tmp/test.hg'
-> clone (build/proto/lib/python/mercurial/hg.py:65)
-> repository (build/proto/lib/python/mercurial/hg.py:54)
-> _lookup (build/proto/lib/python/mercurial/hg.py:31)
-> _local (build/proto/lib/python/mercurial/hg.py:16)
-> __getattribute__ (build/proto/lib/python/mercurial/demandload.py:56)
-> module (build/proto/lib/python/mercurial/demandload.py:53)
...

Of course, this being DTrace, we can tie all of this into general system activity as usual. I also added "ustack helper" support. This is significantly more tricky to implement, but enormously useful for following the path of Python code. For example, imagine we want to look at what's causing write()s in the clone operation above. As usual:

#!/usr/sbin/dtrace -Zs

syscall::write:entry /pid == $target/
{
        @[jstack(20)] = count();
}

END
{
        trunc(@, 2);
}

Note that we're using jstack() to make sure we have enough space allocated for the stack strings reported. Now as well as the C stack, we can see what Python functions are involved in the user stack trace:

# ./writes.d -c 'hg clone /tmp/test.hg'
...
              libc.so.1`_write+0x15
              libc.so.1`_fflush_u+0x36
              libc.so.1`fflush+0x43
              libpython2.4.so.1.0`file_flush+0x2a
              libpython2.4.so.1.0`call_function+0x32a
              libpython2.4.so.1.0`PyEval_EvalFrame+0xbdf
                [ build/proto/lib/python/mercurial/transaction.py:49 (add) ]
              libpython2.4.so.1.0`PyEval_EvalCodeEx+0x732
              libpython2.4.so.1.0`fast_function+0x112
              libpython2.4.so.1.0`call_function+0xda
              libpython2.4.so.1.0`PyEval_EvalFrame+0xbdf
                [ build/proto/lib/python/mercurial/revlog.py:1137 (addgroup) ]
              libpython2.4.so.1.0`PyEval_EvalCodeEx+0x732
              libpython2.4.so.1.0`fast_function+0x112
              libpython2.4.so.1.0`call_function+0xda
              libpython2.4.so.1.0`PyEval_EvalFrame+0xbdf
                [ build/proto/lib/python/mercurial/localrepo.py:1849 (addchangegroup) ]
              libpython2.4.so.1.0`PyEval_EvalCodeEx+0x732
              libpython2.4.so.1.0`fast_function+0x112
              libpython2.4.so.1.0`call_function+0xda
              libpython2.4.so.1.0`PyEval_EvalFrame+0xbdf
                [ build/proto/lib/python/mercurial/localrepo.py:1345 (pull) ]
              libpython2.4.so.1.0`PyEval_EvalCodeEx+0x732
              libpython2.4.so.1.0`fast_function+0x112
              148

              libc.so.1`_write+0x15
              libc.so.1`_fflush_u+0x36
              libc.so.1`fclose+0x6e
              libpython2.4.so.1.0`file_dealloc+0x36
              libpython2.4.so.1.0`frame_dealloc+0x65
              libpython2.4.so.1.0`PyEval_EvalCodeEx+0x75c
              libpython2.4.so.1.0`fast_function+0x112
              libpython2.4.so.1.0`call_function+0xda
              libpython2.4.so.1.0`PyEval_EvalFrame+0xbdf
                [ build/proto/lib/python/mercurial/localrepo.py:1849 (addchangegroup) ]
              libpython2.4.so.1.0`PyEval_EvalCodeEx+0x732
              libpython2.4.so.1.0`fast_function+0x112
              libpython2.4.so.1.0`call_function+0xda
              libpython2.4.so.1.0`PyEval_EvalFrame+0xbdf
                [ build/proto/lib/python/mercurial/localrepo.py:1345 (pull) ]
              libpython2.4.so.1.0`PyEval_EvalCodeEx+0x732
              libpython2.4.so.1.0`fast_function+0x112
              libpython2.4.so.1.0`call_function+0xda
              libpython2.4.so.1.0`PyEval_EvalFrame+0xbdf
                [ build/proto/lib/python/mercurial/localrepo.py:1957 (clone) ]
              libpython2.4.so.1.0`PyEval_EvalCodeEx+0x732
              libpython2.4.so.1.0`fast_function+0x112
              libpython2.4.so.1.0`call_function+0xda
              148

Creating a ustack helper

As anyone who's come across the Java dtrace helper source will know, creating a ustack helper is rather a black art.

When a ustack helper is present, it is called in-kernel for each entry in a stack when the ustack() action occurs (source). The D instructions in the helper action are executed such that the final string value is taken as the result of the helper. Typically for Java, there is no associated C function symbol for the PC value at that point in the stack, so the result of the helper is used directly in the stack trace. However, this is not true for Python, so that's why you see a different format above: the normal stack entry, plus the result of the helper in annotated form where it returned a result (in square brackets).

The helper is given two arguments: arg0 is the PC value of the stack entry, and arg1 is the frame pointer. The helper is expected to construct a meaningful string from just those values. In Python, the PyEval_EvalFrame function always has a PyFrameObject * as one of its arguments. By having the helper look at this pointer value and dig around the structures, we can find pointers to the strings containing the file name and function, as well as the line number. We can copy these strings in, and, using alloca() to give ourselves some scratch space, build up the annotation string you see above.
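
In C terms (against the Python 2.4 headers), the pointer chase the helper performs looks roughly like this; the real helper has to do each dereference with copyin()/copyinstr() in probe context and builds the string with alloca(), and the line number handling here is a simplification:

#include <Python.h>
#include <frameobject.h>
#include <stdio.h>

/* Conceptual equivalent of the helper's digging, not the D source itself. */
static void
print_frame_annotation(PyFrameObject *f)
{
        const char *file = PyString_AsString(f->f_code->co_filename);
        const char *func = PyString_AsString(f->f_code->co_name);

        printf("  [ %s:%d (%s) ]\n", file, f->f_lineno, func);
}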

Debugging helpers isn't particularly easy, since they live and run in probe context. You can use mdb's DTrace debugging facilities to find out what happened, and some careful mapping between the failing D instructions and the helper source can pinpoint the problem. Using this method it was relatively easy to get a working helper for 32-bit x86. Both SPARC and 64-bit x86 proved more troublesome, though. The problems were both related to the need to find the PyFrameObject * given the frame pointer. On amd64, the function we needed to trace was passing its arguments in registers, as the ABI specifies, so the argument wasn't accessible on the stack via the frame pointer. On SPARC, the pointer we needed was stored in a register that was subsequently re-used as a scratch register. Both problems were solved, rather cheesily, by modifying the way the function was called.


Wednesday Feb 21, 2007

Booting para-virtualised OS instances

Whilst I'm waiting for my home directory to reappear, I thought I'd mention some of the work I've done to support easy booting of domains in Xen.

For dom0 to be able to boot a para-virtualised domU, it needs to be able to bootstrap it. In particular, it needs to be able to read the kernel file and its associated ramdisk so it can hand off control to the kernel's entry point when the domain is created. So these files must somehow be made accessible in dom0. Previously, you had to copy the files out of the domU filesystem into dom0 yourself. This was often difficult (consider getting files off an ext2 filesystem in a Solaris dom0), and was obviously prone to errors such as forgetting to update the copies when upgrading the kernel.

For a while now, Xen has had support for a bootloader. This runs in userspace and is responsible for copying the files (those specified by kernel and ramdisk in the domain's config file) out to a temporary directory in dom0; the files are then passed on to the domain builder. Xen has shipped with a bootloader called pygrub. Whilst somewhat confusingly named, it essentially emulated the grub menu. It had backends for a couple of Linux filesystems written in Python, and worked by searching for a grub.conf file, then presenting a lookalike grub menu for the user to interact with. When an entry was selected, the specified files would be read off the filesystem and passed back to the builder.

This worked reasonably well for Linux, but we felt there were a number of problems. First, the interactive menu only worked for the first boot; subsequent reboots would automatically choose an entry without allowing user interaction (though this is now fixed in xen-unstable). Its interactive nature also seemed quite a stumbling block for things like remote domain management; you really don't want to babysit domain creation. Also, the implementation of the filesystem backends wasn't ideal; there was only limited Linux filesystem support, and it didn't work very well.

We've adapted pygrub to help with some of these issues. First, we replaced the filesystem code with a C library called libfsimage. The intention here is to provide a stable API for accessing filesystem images from userspace. Thus it provides a simple interface for reading files from a filesystem image and a plugin architecture to provide the filesystem support. This plugin API is also stable, allowing filesystems past, present and future to be transparently supported. Currently there are plugins for ext2, reiserfs, ufs and iso9660, and we expect to have a zfs plugin soon. We borrowed the grub code for all of these plugins to simplify the implementation, but the API allows for any implementation.

Some people were suggesting solutions involving loopback mounts. This was problematic for us for two main reasons. First, filesystem support in the different dom0 OS's is far from complete; for example, Solaris has no ext2 support, and Linux has no (real) ZFS support. Second, and more seriously, it exposes a significant gap in terms of isolation: the dom0 kernel FS code must be entirely resilient against a corrupt domU filesystem image. If we are to consider domU's as untrusted, it doesn't make sense to leave this open as an attack vector.

Another simple change we made was to allow operation without a grub.conf at all. You can specify a kernel and ramdisk and make pygrub automatically load them from the domU filesystem. Even easier, you can leave out all configuration altogether, and a Solaris domU will automatically boot the correct kernel and ramdisk. This makes setting up your config for a domU much easier.

pygrub understands both fdisk partitions and Solaris slices, so simply specifying the disk will cause the bootloader to look for the root slice and grab the right files to boot.

There's more work we can do yet, of course.


Thursday Nov 23, 2006

64-bit Python in Nevada build 53

Coming to you in build 53 of OpenSolaris is 64-bit Python, which I worked on with Laszlo Peter of JDS fame. This means Python modules can make use of 64-bit versions of libraries, as well as 64-bit plugins. You can run the 64-bit version of Python via the path /usr/bin/amd64/python (on x86). This path isn't quite set in stone yet, so don't rely on it.

This facility didn't previously exist on any OS, so we had to make some innovations in terms of how Python lets modules build and load. In particular the Makefile used by Python previously hard-coded certain compiler flags etc. We had to make this dynamic. Also, we had to make some modifications to where Python looks for .so files when loading modules. Previously it would just assume that, say, /usr/lib/python2.4/foo.so was of the correct word size. Now, if it's running 64-bit, it will look for /usr/lib/python2.4/64/foo.so.

Similarly, building a Python module using the 64-bit Python will automagically install the .so file in the right place. Thanks to their architecture-independence, we don't need the same tricks for the .pyc files.

The need for this arose from the continuing work on Solaris dom0's running under Xen. The kernel/hypervisor interfaces provided are not 64-bit clean in the sense that 32-bit tools cannot deal with 64-bit domains; as a result, we need to run (the Python-based) xend as a native binary.

As an added bonus, Laca has also upgraded to Python 2.4.4, which finally enables the curses module on Solaris; also fixed are some niggling problems with accidental regeneration of .pyc files.


Friday Jul 14, 2006

Save/restore of MP Solaris domUs

In honour of our new release of OpenSolaris on Xen, here's some details on the changes I've made to support save/resume (and hence migration and live migration) with MP Solaris domUs. As before, to actually see the code I'm describing, you'll need to download the sources - sorry about that.

Under Xen, the suspend process is somewhat unusual, in that only the CPU context for (virtual) CPU0 is stored in the state file. This implies that the actual suspend operation must be performed on CPU0, and we have to find some other way of capturing CPU context (that is, the register set) for the other CPUs.

In the Linux implementation, Xen suspend/resume works by using the standard CPU hotplug support for all CPUs other than CPU0. Whilst this works well for Linux, this approach is more troublesome for Solaris. Hot-unplugging the other CPUs doesn't match well with the mapping between Xen and Solaris notions of "offline" CPUs (the interested can read the big comment on line 406 of usr/src/uts/i86xen/os/mp_xen.c for a description of how this mapping currently works). In particular, offline CPUs on Solaris still participate in IPI interrupts, whilst a "down" VCPU in Xen cannot.

In addition, the standard CPU offlining code in Solaris is not built for this purpose; for example, it will refuse to offline a CPU with bound threads, or the last CPU in a partition.

However, all we really need to do is get the other CPUs into a known state which we can recover during the resume process. All the dispatcher data structures etc. associated with the CPUs can remain in place. To this end, we can use pause_cpus() on the other CPUs. By replacing the pause handler with a special routine (cpu_pause_suspend()), we can store the CPU context via a setjmp(), waiting until all CPUs have reached the barrier. We need to disable interrupts (or rather, Xen's virtualized equivalent of interrupts), as we have to tear down all the interrupts as part of the suspend process, and we need to ensure none of the CPUs go wandering off.

Once all CPUs are blocked at the known synchronisation point, we can tell Xen to "down" the other VCPUs so they can no longer run, and complete the remaining cleanup we need to do before we tell Xen we're ready to stop via HYPERVISOR_suspend().

On resume, we will come back on CPU0, as Xen stored the context for that CPU itself. After redoing some of the setup we tore down during suspend, we can move on to resuming the other CPUs. For each CPU, we call mach_cpucontext_restore(). We use the same Xen call used to create the CPUs during initial boot. In this routine, we fiddle a little bit with the context saved in the jmpbuf by setjmp(); because we're not actually returning via a normal longjmp() call, we need to emulate it. This means adjusting the stack pointer to simulate a ret, and pretending we've returned 1 from setjmp() by setting the %eax or %rax register in the context.

When each CPU's context is created, it will look as if it's just returned from the setjmp() in cpu_pause_suspend(), and will continue upon its merry way.
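
A tiny user-space analogue of the trick (illustrative only; the kernel version stores the context in a label_t and brings the VCPU back via the hypervisor rather than calling longjmp()):

#include <setjmp.h>
#include <stdio.h>

static jmp_buf cpu_ctx;         /* stands in for the per-CPU saved context */

int
main(void)
{
        if (setjmp(cpu_ctx) == 0) {
                printf("context saved; parking at the barrier\n");
                /* ...suspend happens here; mach_cpucontext_restore() emulates... */
                longjmp(cpu_ctx, 1);
        } else {
                printf("resumed: continuing on our merry way\n");
        }
        return (0);
}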

Inevitably, being a work-in-progress, there are still bugs and unresolved issues. Since offline CPUs won't participate in a cpu_pause(), we need to make sure that those CPUs (which will typically be sitting in the idle loop) are safe; currently this isn't being done. There are also some open issues with 64-bit live migration, and suspending SMP domains with virtual disks, which we're busy working on.


Thursday Apr 13, 2006

Fun with stack corruption

Today, we were seeing a very odd crash in some xen code. The core dump wasn't of great use, since both %eip and %ebp were zeroed, which means no backtrace. Instead I attached mdb to the process and started stepping through to see what was happening. It soon transpired that we were crashing after successfully executing a C function called xspy_introduce_domain(), but before we got back to the Python code that calls into it. After a little bit of head-scratching, I looked closer at the assembly for this function:

xs.so`xspy_introduce_domain:            pushl  %ebp
xs.so`xspy_introduce_domain+1:          movl   %esp,%ebp
...
xs.so`xspy_introduce_domain+0x57:       subl   $0xc,%esp
xs.so`xspy_introduce_domain+0x5a:       leal   -0xc(%ebp),%eax
xs.so`xspy_introduce_domain+0x5d:       pushl  %eax
xs.so`xspy_introduce_domain+0x5e:       leal   -0x8(%ebp),%eax
xs.so`xspy_introduce_domain+0x61:       pushl  %eax
xs.so`xspy_introduce_domain+0x62:       leal   -0x2(%ebp),%eax
xs.so`xspy_introduce_domain+0x65:       pushl  %eax
xs.so`xspy_introduce_domain+0x66:       pushl  $0xc4b13114
xs.so`xspy_introduce_domain+0x6b:       pushl  0xc(%ebp)
xs.so`xspy_introduce_domain+0x6e:       call   +0x43595220      
...
xs.so`xspy_introduce_domain+0x11f:      leave
xs.so`xspy_introduce_domain+0x120:      ret

Seems OK - we're pushing three pointers onto the stack (+0x5a-0x65) and two other arguments. Let's look at the sources:

static PyObject *xspy_introduce_domain(XsHandle *self, PyObject *args)
{
    domid_t dom;
    unsigned long page;
    unsigned int port;

    struct xs_handle *xh = xshandle(self);
    bool result = 0;

    if (!xh)
        return NULL;
    if (!PyArg_ParseTuple(args, "ili", &dom, &page, &port))
        return NULL;

Looking up the definition of PyArg_ParseTuple() gave me the clue as to the problem. The format string specifies that we're giving the addresses of an int, long, and int. Yet in the assembly, the offsets of the leal instructions indicate we're pushing addresses to two 32-bit storage slots, and one 16-bit slot. So when PyArg_ParseTuple() writes its 32-bit quantity, it's going to overwrite two more bytes than it should.

As it happens, we're at the very top of the local stack storage space (-0x2(%ebp)). So those two bytes actually end up over-writing the bottom two bytes of the old %ebp we pushed at the start of the function. Then we pop that corrupted value back into the %ebp register via the leave. This has no effect until our caller calls leave itself. We move %ebp into %esp, then attempt to pop from the top of this stack into %ebp again. As it happens, the memory pointed to by the corrupt %ebp is zeroed; thus, we end up setting %ebp to zero. Finally, our caller does a ret, which pops another zero, but this time into %eip. Naturally this isn't a happy state of affairs, and we find ourselves with a core dump as described earlier.

Presumably this bug happened in the first place because someone didn't notice that domid_t was a 16-bit quantity. What's amazing is that nobody else has been hitting this problem!
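
One possible fix (a sketch of the idea, not necessarily the exact patch that went in) is to let PyArg_ParseTuple() write into a full-width temporary and narrow it afterwards:

    int dom_tmp;                /* full 32-bit slot for PyArg_ParseTuple */
    domid_t dom;
    unsigned long page;
    unsigned int port;
    ...
    if (!PyArg_ParseTuple(args, "ili", &dom_tmp, &page, &port))
        return NULL;
    dom = (domid_t)dom_tmp;     /* narrow to the 16-bit type safely */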


Tuesday Feb 14, 2006

A brief tour of i86xen

In this post, I'm going to give a quick walk through the major changes we've made so far in doing our port of Solaris to the Xen "platform". As we've only supplied a tarball of the source tree so far, I can't hyperlink to the relevant bits - sorry about that. As our code is still under heavy development, you can expect some of this code organisation to change significantly; nonetheless I thought this might be useful for those interested in peeking into the internals of what we've done so far.

As you might expect, the vast majority of the changes we've made reside in the kernel. To support booting Solaris under Xen (both domU and dom0, though as we've said the latter is still in the very early stages of development), we've introduced a new platform based on i86pc called i86xen. Wherever possible, we've tried to share common code by using i86pc's sources. There's still some cleanup we can do in this area.

Within usr/src/uts/i86xen, there are a number of Xen-specific source files:

io/psm/
Contains the PSM ("Platform-Specific Module") module for Xen. This mirrors the PSM provided by i86pc, but deals with the hypervisor-provided features such as the clock timer and the events system.
io/xendev/
This contains the virtual root nexus driver "xendev". All of the virtual frontend drivers are connected to this.
io/xvbd/
The virtual block driver. It's currently non-functional with the version of Xen we're working with; we're working hard on getting it functional.
os/
The guts of the kernel/hypervisor code. Amongst other things, it provides interfaces for dealing with events in evtchn.c and hypervisor_machdep.c (the hypervisor's version of virtual interrupts, which hook into Solaris's standard interrupt system), the grant table in gnttab.c (used for providing access to, and transfer of, pages between frontend and backend), suspend/resume in xen_machdep.c, and support routines for the debugger and the MMU code (mach_kdi.c and xen_mmu.c respectively).

As mentioned we use the i86pc code where possible, occasionally using #ifdefs where minor differences are found. In particular we re-use the i86pc HAT (MMU management) code found in i86pc/vm. You can also find code for the new boot method described by Joe Bonasera in i86pc/dboot and i86pc/boot.

A number of drivers that are needed by Xen but aren't i86xen specific live under usr/src/uts/common:

common/io/xenbus_*.c common/io/xenbus/
"xenbus" is a simple transport for configuration data provided by domain0; for example, it provides a node control/shutdown which will notify the domainU that the user has requested the domain to be shutdown (or suspended) from domain0's management tools. This code provides this support.
common/io/xencons/
The virtual console frontend driver.
common/io/xennetf/
The virtual net device frontend driver.

As you might expect, the userspace changes we've needed to make so far have been reasonably minimal. Beyond supporting the new i86xen platform definition, the only significant changes have been to usr/src/cmd/mdb/, where we've added some changes to better support debugging of the Xen-style x86 MMU.


Monday Feb 13, 2006

Live migration of Solaris instances

Today we released our current source tree for our Solaris Xen port; for more details and the downloads see the Xen community on OpenSolaris.

One of the most useful features of Xen is its ability to package up a running OS instance (in Xen terminology, a "domainU", where "U" stands for "unprivileged"), plus all of its state, and take it offline, to be resumed at a later time. Recently we performed the first successful live migration of a running Solaris instance between two machines. In this blog I'll cover the various ways you can do this.

Para-virtualisation of the MMU

Typical "full virtualisation" uses a method known as "shadow page tables", whereby two sets of pagetables are maintained: the guest domain's set, which aren't visible to the hardware via cr3, and page tables visible to the hardware which are maintained by the hypervisor. As only the hypervisor can control the page tables the hardware uses to resolve TLB misses, it can maintain the virtualisation of the address space by copying and validating any changes the guest domain makes to its copies into the "real" page tables.

All these duplicate pages come at a cost, of course. A para-virtualisation approach (that is, one where the guest domain is aware of the virtualisation and complicit in operating within the hypervisor) can take a different tack. In Xen, the guest domain is made aware of a two-level address system. The domain is presented with a linear set of "pseudo-physical" addresses comprising the physical memory allocated to the domain, as well as the "machine" addresses for each corresponding page. The machine address for a page is what's used in the page tables (that is, it's the real hardware address). Two tables are used to map between pseudo-physical and machine addresses. Allowing the guest domain to see the real machine address for a page provides a number of benefits, but slightly complicates things, as we'll see.
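
Conceptually, the two tables are just flat arrays indexed by frame number; a sketch (illustrative C, not the actual Xen or Solaris definitions):

#include <stdint.h>

/*
 * Illustrative only.  The guest maintains the pseudo-physical -> machine
 * table for its own pages; the hypervisor exports a machine ->
 * pseudo-physical table covering all of memory.
 */
extern uint64_t *p2m_table;     /* indexed by PFN, yields MFN */
extern uint64_t *m2p_table;     /* indexed by MFN, yields PFN */

static inline uint64_t
pfn_to_mfn(uint64_t pfn)
{
        return (p2m_table[pfn]);
}

static inline uint64_t
mfn_to_pfn(uint64_t mfn)
{
        return (m2p_table[mfn]);
}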

Save/Restore

The simplest form of "packaging" a domain is suspending it to a file in the controlling domain (a privileged OS instance known as "domain 0"). A running domain can be taken offline via an xm save command, then restored at a later time with xm restore, without having to go through a reboot cycle - the domain state is fully restored.

xm save xen-7 /tmp/domain.img

An xm save notifies the domain to suspend itself. This arrives via the xenbus watch system on the node control/shutdown, and is handled via xen_suspend_domain(). This is actually remarkably simple. First we leverage Solaris's existing suspend/resume subsystem, CPR, to iterate through the devices attached to the domain's device nexus. This calls each of the virtual drivers we use (the network, console, and block device frontends) with a DDI_SUSPEND argument. The virtual console, for example, simply removes its interrupt handler in xenconsdetach(); in a guest domain, this tears down the Xen event channel used to communicate with the console backend. The rest of the suspend code deals with tearing down some of the things we use to communicate with the hypervisor and domain 0, such as the grant table mappings. Additionally, we convert a couple of stored MFN values (the frame numbers of machine addresses) into pseudo-physical PFNs. This is because the MFNs are free to change when we restore the guest domain; as the PFNs aren't "real", they will stay the same. Finally we call HYPERVISOR_suspend() to call into the hypervisor and tell it we're ready to be suspended.

Now the domain 0 management tools are ready to checkpoint the domain to the file we specified in the xm save command. Despite the name, this is done via xc_linux_save(). Its main task is to convert any MFN values that the domain still has into PFN values, then write all its pages to the disk. These MFN values are stored in two main places; the PFN->MFN mapping table managed by the domain, and the actual pages of the page tables.

During boot, we identified which pages store the PFN->MFN table (see xen_relocate_start_info()), and pointed to that structure in the "shared info" structure, which is shared between the domain and the hypervisor. This is used to map the table in xc_linux_save().

The hypervisor keeps track of which pages are being used as page tables. Thus, after domain 0 has mapped the guest domain's pages, we write out the page contents, but modify any pages that are identified as page tables. This is handled by canonicalize_pagetable(); this routine replaces all PTE entries that contain MFNs with the corresponding PFN value.
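
The per-PTE rewrite amounts to something like the following (a sketch only; the mask follows the x86-64 PTE layout, and mfn_to_pfn() is the machine-to-pseudo-physical lookup sketched earlier):

#include <stdint.h>

#define PAGE_SHIFT      12
#define PTE_MFN_MASK    0x000ffffffffff000ULL   /* frame-number bits of a PTE */
#define PTE_PRESENT     0x001ULL

extern uint64_t mfn_to_pfn(uint64_t mfn);       /* M2P lookup */

/* Swap the machine frame in a PTE for its pseudo-physical frame, leaving
 * the permission bits untouched. */
static uint64_t
canonicalize_pte(uint64_t pte)
{
        uint64_t mfn;

        if (!(pte & PTE_PRESENT))
                return (pte);                   /* nothing to translate */

        mfn = (pte & PTE_MFN_MASK) >> PAGE_SHIFT;
        return ((pte & ~PTE_MFN_MASK) | (mfn_to_pfn(mfn) << PAGE_SHIFT));
}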

There are a couple of other things that need to be fixed too, such as the GDT.

xm restore /tmp/domain.img

Restoring a domain is essentially the reverse operation: the data for each page is written into one of the machine addresses reserved for the "new" domain; if we're writing a saved page table, we replace each PTE's PFN value with the new MFN value used by the new instance of the domain.

Eventually the restored domain is given back control, coming out from the HYPERVISOR_suspend() call. Here we need to rebuild the event channel setup, and anything else we tore down before suspending. Finally, we return back from the suspend handler and continue on our merry way.

Migration

xm migrate xen-7 remotehost

A normal save/restore cycle happens on the same machine, but migrating a domain to a separate machine is a simple extension of the process. Since our save operation has replaced any machine-specific frame number value with the pseudo-physical frames, we can easily do the restore on a remote machine, even though the actual hardware pages given to the domainU will be different. The remote machine must have the Xen daemon listening on the HTTP port, which is a simple change in its config file. Instead of writing each page's contents to a file, we can transmit it across HTTP to the Xen daemon running on a remote machine. The restore is done on that machine in the same manner as described above.

Live Migration

xm migrate --live xen-7 remotehost

The real magic happens with live migration, which keeps the time the domain is not running to a bare minimum (on the order of milliseconds). Live migration relies on the empirical observation that an OS instance is unlikely to modify a large percentage of its pages within a given time frame; thus, by iteratively copying over modified domain pages, we'll eventually reach a point where the remaining data to be copied is small enough that the actual downtime for a domainU is minimal.

In operation, the domain is switched to use a modified form of the shadow page tables described above, known as "log dirty" mode. In essence, a shadow page table is used to notify the hypervisor if a page has been written to, by keeping the PTE entry for the page read-only: an attempt to write to the page causes a page fault. This page fault is used to mark the domain page as "dirty" in a bitmap maintained by the hypervisor, which then fixes up the domain's page fault and allows it to continue.

Meanwhile, the domain management tools iteratively transfer the domain's pages to the remote machine. They read the dirty page bitmap and re-transmit any page that has been modified since it was last sent, until the remaining set is small enough that the domain can finally be told to suspend, and execution switched over to the remote machine. This process is described in more detail in Live Migration of Virtual Machines.
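
In outline, the pre-copy loop looks something like this (illustrative C; the helpers and thresholds are invented stand-ins, not the xc_* API):

/* Stand-ins for the real save/migrate machinery. */
extern void send_all_pages(void);
extern int  send_dirty_pages(void);     /* returns the number of pages sent */
extern void suspend_domain(void);
extern void send_final_state(void);     /* last dirty pages plus CPU context */

#define DIRTY_THRESHOLD  50             /* assumed "small enough" cut-off */
#define MAX_ROUNDS       30             /* assumed bound on iterations */

static void
live_migrate(void)
{
        int round;

        send_all_pages();                       /* first full copy */

        for (round = 0; round < MAX_ROUNDS; round++) {
                if (send_dirty_pages() < DIRTY_THRESHOLD)
                        break;                  /* dirty set has converged */
        }

        suspend_domain();                       /* the brief pause starts here */
        send_final_state();
        /* ...and the domain resumes on the remote machine. */
}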

Whilst transmitting all the pages takes a while, the actual time between suspension and resume is typically very small. Live migration is pretty fun to watch happen; you can be logged into the domain over ssh and not even notice that the domain has migrated to a different machine.

Further Work

Whilst live migration is currently working for our Solaris changes, there's still a number of improvements and fixes that need to be made.

On x86, we usually use the TSC register as the basis for a high-resolution timer (heavily used by the microstate accounting subsystem). We don't directly use any virtualisation of the TSC value, so when we restore a domain, we can see a large jump in the value, or even see it go backwards. We handle this OK (once we fixed bug 6228819 in our gate!), but don't yet properly handle the fact that the relationship between TSC ticks and clock frequency can change between a suspend and resume. This screws up our notion of timing.

We don't make any effort to release physical pages that we're not currently using. This makes suspend/resume take longer than it should, and it's probably worth investigating what can be done here.

Currently, many hardware-specific instructions and features are enabled at boot by patching in instructions if we discover the CPU supports them. If such a kernel is migrated to a machine with different CPUs, the domain can naturally fail badly; for example, we discovered a domain that died when it was migrated to a host that didn't support the sfence instruction. We need to investigate preventing incompatible migrations (the standard Xen tools currently do no verification), and also look at whether we can adapt to some of these changes when we resume a domain.


Monday Jan 16, 2006

Generating assembly structure offset values with CTF

The Solaris kernel contains a fair amount of assembly, and this often needs to access C structures (and in particular know the size of such structures, and the byte offsets of their members). Since the assembler can't grok C, we need to provide constant values for it to use. This also applies to the C library and kmdb.

In the kernel, the header assym.h provides these values; for example:

#define T_STACK 0x4
#define T_SWAP  0x68
#define T_WCHAN 0x44

These values are the byte offset of certain members into struct _kthread. For each of the types we want to reference from assembly, a template is provided in one of the offsets.in files. For the above, we can see in usr/src/uts/i86pc/ml/offsets.in:

_kthread        THREAD_SIZE
        t_pcb                   T_LABEL
        t_lock
        t_lockstat
        t_lockp
        t_lock_flush
        t_kpri_req
        t_oldspl
        t_pri
        t_pil
        t_lwp
        t_procp
        t_link
        t_state
        t_mstate
        t_preempt_lk
        t_stk                   T_STACK
        t_swap
        t_lwpchan.lc_wchan      T_WCHAN
        t_flag                  T_FLAGS

This file contains structure names as well as their members. Each of the members listed (which do not have to be in order, nor does the list need to be complete) causes a define to be generated; by default, an uppercase version of the member name is used. As can be seen, this can be overridden by specifying the #define name to be used. The THREAD_SIZE define corresponds to the byte size of the entire structure (it's also possible to generate a "shift" value, which is log2(size)).
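
The generated values are nothing more exotic than offsetof() and sizeof() results, computed from the CTF data rather than by the compiler; in C terms (with a toy structure standing in for struct _kthread):

#include <stddef.h>
#include <stdio.h>

/* A toy stand-in for struct _kthread; the real offsets come from CTF. */
struct toy_kthread {
        void    *t_pcb;
        void    *t_stk;
        void    *t_swap;
};

int
main(void)
{
        /* Equivalent in spirit to the T_STACK and THREAD_SIZE defines. */
        printf("#define\tT_STACK\t0x%lx\n",
            (unsigned long)offsetof(struct toy_kthread, t_stk));
        printf("#define\tTHREAD_SIZE\t0x%lx\n",
            (unsigned long)sizeof (struct toy_kthread));
        return (0);
}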

To generate the header with the right offset and size values we need, a script is used to generate CTF data for the needed types, which then uses this data to output the assym.h header. This is a Perl script called genoffsets, and the build invokes it with a command line akin to:

genoffsets -s ctfstabs -r ctfconvert cc < offsets.in > assym.h

The hand-written offsets.in file serves as input to the script, and it generates the header we need. The script takes the following steps:

  1. Two temporary files are generated from the input. One is a C file consisting of #includes and any other pre-processor directives. The other contains the meat of the offsets file.
  2. The C file containing all the includes is built with the compile line given (I have stripped the compiler options above for readability).
  3. ctfconvert is run on the built .o file.
  4. The pre-processor is run across the second file (the temporary offsets file).
  5. This pre-processed file is passed to ctfstabs along with the .o file.

ctfstabs reads the input offsets file, and for each entry, looks up the relevant value in the CTF data contained in the .o file passed to it. It has two output modes (which I'll come to shortly), and in this case we are using the genassym driver to output the C header. As you can see, this is a fairly simple process of processing each line of the input and looking up the type data in the CTF contained in the .o file.

A similar process is used for generating forth debug files for use when debugging the kernel via the SPARC PROM. This takes a different format of offsets file more appropriate to generating the forth debug macros, described in the forth driver.

To finish off the output header, the output from a small program called genassym (or, on SPARC, genconst) is appended. It contains a bunch of printfs of constants. A lot of those don't actually need to be there since they're simple constant defines, and the assembly file could just include the right header, but others are still there for reasons such as:

  • The macros which hide assembler syntax differences such as _MUL aren't implemented for the C compiler
  • The value is an enum type, which ctfstabs doesn't support
  • The constant is a complicated composed macro that the assembler can't grok

and other reasons. Whilst a lot of these could be cleaned up and removed from these files, it's probably not worth the development effort except as a gradual change.


Saturday Nov 19, 2005

Resource management of services

SMF introduced the notion of a service as a first-order object in the Solaris OS. Thus, you have administration interfaces capable of dealing with services (as opposed to the implicit service represented by a set of processes, for example). It doesn't seem very well known, but as Stephen Hahn mentions, this also applies to the resource management facilities of Solaris.

A service can be bound to a project (as well as a resource pool, which I won't go into here). This allows us to add resource controls to the project which will apply to the service as a whole, which is significantly more reliable and usable than trying to deal with individual daemons etc. Unfortunately, it's not as obvious to set up as it should be (of which more later), so here's a simple walkthrough.

We're going to set up a simple 'forkbomb' service, which simply runs this program:

#include <unistd.h>
#include <stdlib.h>

int main()
{
        int first = 1;
        while (1) {
                if (fork() > 0 && first)
                        exit(0);
                first = 0;
        }
}

If you try running this program in an environment lacking resource controls, don't expect to be able to do much to your box except reboot it. Note the first parent does an exit(0) so that SMF doesn't think the service has failed (since we'll be a standard contract service). Here's the SMF manifest for our service:

<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type='manifest' name='forkbomb'>
<service name='application/forkbomb' type='service' version='1'>
        <exec_method
            type='method'
            name='start'
            exec='/opt/forkbomb/bin/forkbomb'
            timeout_seconds='10'>
                <method_context project='forkbomb'>
                        <method_credential user='root' />
                </method_context>
        </exec_method>

        <exec_method
            type='method'
            name='stop'
            exec=':kill'
            timeout_seconds='10' />

        <instance name='default' enabled='false' />
</service>
</service_bundle>

Note that as well as setting the project in the method context, we've set a method credential; this is a workaround for a problem I'll come to later. Now we need to create the 'forkbomb' project for the service:

# projadd -K 'project.max-lwps=(privileged,100,deny)' forkbomb
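If you want to double-check, the new entry lands in /etc/project; it should look something like the following (the project ID is allocated automatically; here it's 100, matching the PROJID column in the prstat output further down):

# grep forkbomb /etc/project
forkbomb:100::::project.max-lwps=(privileged,100,deny)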

Alternatively we could create a new user for the service to use, set the method credential to use that user, then change our 'forkbomb' project to allow the user to join it. It's worth noting, though, that root can join any project regardless of its user list, so the simpler setup above still works, and that's what we've done here.
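If you did want to use a dedicated user instead, it would look something like this (the user name is made up for the example), with the manifest's method_credential changed to match:

# useradd forkbomber
# projmod -a -U forkbomber forkbomb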

Finally, we can import the manifest as a service, then temporarily enable it (so it won't start next time we boot!):

# svccfg import /opt/forkbomb/manifest/forkbomb.xml
# svcadm enable -t forkbomb
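At this point we can check that the resource control is in place on the project with prctl, which should report the privileged value of 100 with a deny action:

# prctl -n project.max-lwps -i project forkbomb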

The forkbomb is now running flat out, but under the constraints of the resource controls we set on its project. Thus we still have a running system, with enough resources left to disable our misbehaving service. Let's have a look at prstat:

Total: 148 processes, 266 lwps, load averages: 68.06, 20.50, 10.75
   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
 21145 root      992K  244K run      1    0   0:00:03 1.4% forkbomb/1
 21132 root      992K  244K run     49    0   0:00:03 1.2% forkbomb/1
 21128 root      992K  244K run     31    0   0:00:03 1.1% forkbomb/1
 21113 root      992K  244K run     31    0   0:00:03 1.1% forkbomb/1
 21176 root      992K  244K run     33    0   0:00:03 1.1% forkbomb/1
 21124 root      992K  244K run     53    0   0:00:03 1.1% forkbomb/1
 21119 root      992K  244K run     52    0   0:00:03 1.1% forkbomb/1
 21156 root      992K  244K run     53    0   0:00:03 1.0% forkbomb/1
 21088 root      992K  244K run     52    0   0:00:03 1.0% forkbomb/1
 21136 root      992K  244K run     43    0   0:00:03 1.0% forkbomb/1
 21133 root      992K  244K run     44    0   0:00:03 1.0% forkbomb/1
 21097 root      992K  244K run     52    0   0:00:03 1.0% forkbomb/1
 21103 root      992K  244K run     56    0   0:00:03 1.0% forkbomb/1
 21092 root      992K  244K run     52    0   0:00:03 1.0% forkbomb/1
 21183 root      992K  244K run     53    0   0:00:03 1.0% forkbomb/1
PROJID    NPROC  SIZE   RSS MEMORY      TIME  CPU PROJECT
   100      100   97M   24M   0.6%   0:04:47  95% forkbomb
     1        5   11M 8268K   0.3%   0:00:00 0.0% user.root
    10        3   18M 8060K   0.3%   0:00:00 0.0% group.staff
     0       40  135M   83M   2.6%   0:00:17 0.0% system
Total: 148 processes, 266 lwps, load averages: 70.60, 21.80, 11.24

As we might expect, there's a high system load (since our fork-bomb ignores the errors from fork() once it hits its resource limit). Note that the 'forkbomb' project has been clamped to a maximum of 100 LWPs; since each forkbomb process has a single LWP, you can see this reflected in the NPROC field. But most importantly, the system is still usable, and we can stop the troublesome service:

# svcadm disable forkbomb

After waiting for the stop method to finish (or time out; either way, all processes in the service contract are killed), we're done!

I mentioned above that we needed to specify a method credential to work around a bug. This is bug 5093847. The way the property lookup currently works, if the use_profile property on the service isn't found, then none of the rest of the method context is examined. Setting the method credential has the side-effect of creating this property, so things work properly. This bug would also be nice to fix because, if the method context properties were always created, we could directly set the project property via svccfg. Any interested parties are strongly encouraged to have a go at fixing it - it's not currently being worked on, and I'd be happy to help :)
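For the curious, directly setting the project would look something along these lines once the method context properties exist (the property names are assumed from how the method context maps into the repository):

# svccfg -s application/forkbomb:default setprop start/project = astring: forkbomb
# svcadm refresh forkbomb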


Friday Nov 18, 2005

Reducing CTF overhead

CTF (Compact C Type Format) encapsulates a reduced form of debugging information similar to DWARF and the venerable stabs. It describes types (structures, unions, typedefs etc.) and function prototypes, and is carefully designed to take a minimum of space in the ELF binaries. The kernel binaries that Sun ship have this data embedded as an ELF section (.SUNW_ctf) so that tools like mdb and dtrace can understand types. Of course, it would have been possible to use existing formats such as DWARF, but they typically have a large space overhead and are more difficult to process.

The CTF data is built from the existing stabs/DWARF data generated by the compiler's -g option, and replaces this existing debugging information in the output binary (ctfconvert performs this job).
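As a rough illustration (treat this as a sketch rather than the canonical build rule, since the exact flags vary), converting a freshly-built object in place looks something like:

ctfconvert -i -L VERSION debug64/bmc_fe.o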

For the sake of kmdb and crash dumps, the CTF data for each kernel binary is present in the memory image of a booted kernel. This implies it's paramount that the amount of CTF data is minimised. Since each kernel module will have references to common types such as cpu_t, there's a lot of duplicated type data in all the CTF sections. To help avoid this duplication, the kernel build uses a process known rather fancifully as 'uniquification'.

Uniquification

Each type in the CTF data has an integer ID associated with it. Observe that the main genunix kernel module has a large number of the common types I mentioned above in its CTF data. We can remove the duplicate data found in other modules by replacing their local type data with references to the copies in genunix's CTF. This process is uniquification. Consider the bmc driver. After building and linking the bmc object, we want to add CTF for its types, but we also uniquify against the genunix binary, like so:

ctfmerge -L VERSION -d ../../intel/genunix/debug64/genunix -o debug64/bmc debug64/bmc_fe.o debug64/bmc_kcs.o

This command takes the CTF data in the objects comprising bmc (previously converted from stabs/DWARF by ctfconvert) and merges them together (removing any shared duplicates between the two different objects). Then it passes through this CTF data, and looks for any types that match ones in the uniqfile (which we specified with the -d option). For each matching type (for example, cpu_t), we replace any references to the local type definition with a reference to genunix's copy of the type data. Remember that type references are simply integer IDs, so this is just a matter of changing the type ID to the one found in genunix's CTF. Let's use ctfdump to look at the results:

$ ctfdump $SRC/uts/i86pc/bmc/debug64/bmc >bmc.ctf
$ ggrep -C2 bmc_kcs_send bmc.ctf
- Types ----------------------------------------------------------------------

  <32769> STRUCT bmc_kcs_send (3 bytes)
        fnlun type=113 off=0
        cmd type=113 off=8
        data type=5287 off=16
...

Here we see that the first member of struct bmc_kcs_send has a type ID of 113. Since this ID is below the range used for bmc's own types (which, as you can see above, start at 32769), it must belong to our parent. We look for our parent, then find the type ID we're looking for:

$ grep cth_parname bmc.ctf
  cth_parname  = genunix
$ ctfdump $SRC/uts/intel/genunix/debug64/genunix >genunix.ctf
$ grep '<113>' genunix.ctf
  <113> TYPEDEF uint8_t refers to 86

This manual process is similar to how the CTF lookup actually happens. This uniquification process saves us a significant amount of CTF data, although it causes us some problems, which we'll discuss next.

CTF labels and additive merges

As noted above, all our uniquified modules have type IDs that refer to the genunix shipped along with them. This means, of course, that if any of the types in genunix itself change without these modules changing too, all the references to genunix types will be wrong, since they work purely by type ID. So, what happens when we need to release kernel changes?

Since we obviously don't want to ship all these modules every time genunix needs to change, we have to keep the existing type IDs in the new genunix binary. But also, we want to have any new or changed types present and correct too. So, instead of doing a full merge and rewriting the existing CTF data in genunix, we perform an "additive merge". This retains the existing CTF types (and IDs) so that references from unchanged modules still point to the right types, and adds on new types.

To do an additive merge, we need to pass a 'withfile' to ctfmerge via its -w option. This first takes all the CTF in the withfile and adds it into the output CTF. Then the CTF from the objects passed to ctfmerge are uniquified against this data. Any remaining types after uniquification are then added on top of the withfile data. This preserves the existing type IDs for any older modules that uniquified against this genunix, whilst also adding the new types.

This 'withfile' is the previous version of genunix. When it was built the first time, we passed -L VERSION to ctfmerge. This adds a label with the value of the environment variable $VERSION. Typically this is something like Generic. When we do the additive merge, we pass in a different label equal to the patch ID of the build, and the additional types are marked with this label. For example, on a Solaris 9 system's genunix:

- Label Table ----------------------------------------------------------------

   5001 Generic
   5981 112233-12
...

Labels are nothing but a mapping from a string to a particular type ID, namely the last type ID covered by that label. So here we see that the original types are numbered from 1 to 5001, and the additive merge with the label "112233-12" added types 5002 through 5981.
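Putting that together, an additive merge for a patch build would look roughly like this (paths and the object list are abbreviated; the withfile is the previously shipped genunix whose type IDs we need to preserve):

ctfmerge -L 112233-12 -w /path/to/shipped/genunix -o debug64/genunix <genunix objects ...>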

CTF from the ip module

The genunix module contains many common types, but the ip module also contains a lot of types used by many kernel modules, but not found in genunix. To further reduce the amount of CTF in these modules, we merge in the CTF data found in ip into the genunix CTF. The modules can then uniquify against this combined data, removing many more duplicate types. Note that we don't do this for patch builds, as the ip module might not ship in a patch. Unfortunately this can cause problems (notably bug 6347000, though this isn't yet accessible from opensolaris.org).

Further reading


Tuesday Jan 04, 2005

VisitorVille

According to VisitorVille, nearly half of web users within Sun are using Windows. Seems a bit suspect.