An Oracle blog about Solaris

Starting out with Solaris on Xen

Chris Beal
Senior Principal Software Engineer

As you may have seen from the announcement and John's blog, we have a new set of Solaris on Xen bits available for download. A lot has changed in the (almost) year since the last drop, and things are certainly a lot easier to set up than they were back then.

The first big difference I noticed is that you can install these bits straight from the DVD, which means no mucking around with bfu.

Once it is installed you also have the joys of much newer Solaris builds, including improvements to networking and removable media (but that isn't the point of this post).

Of course the thing you really want to do is run multiple operating systems, so (while there are documents here) I always think it's nice to see people's use cases and find out how they got things working.

I'm going to use zfs for storage, so I made sure I had a large amount of space available for a zpool:

# zpool create guests c2d0s7

First gotcha: after install, the default boot entry in the grub menu.lst is for Solaris on metal (i.e. not booting under Xen). You can change that before rebooting, or select Solaris dom0 from the grub menu.
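If you want to fix the default before rebooting, a quick look at menu.lst shows what you're dealing with. This is only a sketch: the titles and entry numbers below are illustrative, so check your own file.

```shell
# List the boot entries and the current default in the grub menu.
# Entry numbering starts at 0, in the order the titles appear.
grep -n '^default\|^title' /boot/grub/menu.lst
# Change the "default" line to the index of the "Solaris dom0"
# entry (the one that boots xen.gz) to boot under Xen by default.
```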

Check you are running under Xen by looking at the output of uname -i:

dominion# uname -i
i86xpv

(dominion is the name of my host)

If that says i86pc then you're not booted under Xen; i86xpv is the new platform modified to run on Xen.

I found that I accidentally booted on metal the first time, and when I then booted under Xen the services weren't enabled, so I had to enable them manually. (If you boot straight into dom0 they start automatically.)

dominion# svcs -a | grep xctl
online 10:51:04 svc:/system/xctl/store:default
online 10:51:11 svc:/system/xctl/xend:default
online 10:51:11 svc:/system/xctl/console:default
online 10:51:16 svc:/system/xctl/domains:default

If any of them says anything other than online, enable it with

# svcadm enable "service name"
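In my case that meant enabling all four by hand; a short loop over the service names from the svcs listing above does the job (a sketch, using exactly those service names):

```shell
# Enable each of the xctl services shown in the svcs output above.
for s in store xend console domains; do
        svcadm enable "svc:/system/xctl/$s:default"
done
```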

I use a zpool to create the disk devices for my domains. This has huge advantages, such as the ability to quickly snapshot a domain (say, after install) so you can always return to that state. You can also clone a snapshot, so if you want many similar domains (say, multiple Solaris development environments) you can clone an install and only the changes between the domains are stored (zfs being copy-on-write).

To set this up you need to create a zvol on your zpool

# zfs create -V 10G guests/solaris-pv

This creates a zvol of up to 10G in size. Unused space is still free for other users of the pool to allocate.
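If you want to double-check what was created, the usual zfs inspection commands work on zvols too (a quick sketch, using the dataset name from above):

```shell
# Show the new zvol and the properties governing its space use.
zfs list guests/solaris-pv
zfs get volsize,reservation guests/solaris-pv
```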

You can access the device for this zvol using /dev/zvol/dsk/guests/solaris-pv (the path you'll see again in the domain configuration below).
So that's simple - how do we install a Solaris domain? First off I create an install Python config file. (Soon there will be a tool to manage the install for you, but that's not really ready yet.)

This Python file describes some simple things about the domain, like where the disk and CD-ROM are.

dominion# cat /guests/configs/solaris-pv-install.py 
name = "solaris-pv-install"
memory = "1024"
disk = [ 'file:/guests/isos/66-0613-nd.iso,6:cdrom,r', 'phy:/dev/zvol/dsk/guests/solaris-pv,0,w' ]
vif = [ '' ]
on_shutdown = 'destroy'
on_reboot = 'destroy'
on_crash = 'destroy'

The name is obvious, and I've copied the ISO image to a local file to speed up the install.

You can kick off the install just by starting the domain

dominion#  xm create -c /guests/configs/solaris-pv-install.py

This says start the domain and give me serial console access to it. You then do a normal Solaris install. Once complete you should create a second Python file to boot off the zvol, but first I'm going to snapshot it so I can quickly duplicate it (though I really should sys-unconfig it first, to make it ask for the hostname and IP info again).

dominion# zfs snapshot guests/solaris-pv@install
dominion# cat solaris-pv.py
name = "solaris-pv"
memory = "1024"
root = "/dev/dsk/c0d0s0"
disk = [ 'phy:/dev/zvol/dsk/guests/solaris-pv,0,w' ]
vif = [ '' ]
on_shutdown = 'destroy'
on_reboot = 'destroy'
on_crash = 'destroy'

and create it with

# xm create -c solaris-pv.py

This then comes up as per a normal Solaris boot. If you've given it an IP address during the install, or set it to use DHCP, you should be able to log in to it using ssh. The networking is effectively bridged; that is to say, you need a real IP address for each domain, on the same network as the dom0.
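This is also where the snapshot taken earlier pays off. A sketch of spinning up a second, near-identical domain from it (the clone name and config file here are hypothetical, and remember the sys-unconfig caveat above):

```shell
# Clone the post-install snapshot; only changes from the original
# install consume new space in the pool (zfs being copy-on-write).
zfs clone guests/solaris-pv@install guests/solaris-pv2

# Point a copy of the domain config at the clone's device,
# /dev/zvol/dsk/guests/solaris-pv2, give it a new name, then:
xm create -c /guests/configs/solaris-pv2.py
```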

So the next question I always get is "Can I run Windows as a domU?", and the answer is "maybe". What we have done up till now is use a paravirtualised domU, that is, one that has been modified to run on Xen: anything that would trigger a privileged operation (interrupt, privileged instruction, etc.) is modified to be a call to the hypervisor. This is nice and fast, but some operating systems haven't had this treatment.

However, with the advent of the Intel Core 2 Duo and Rev F Opteron/Athlon64 (Socket AM2) processors, some hardware support for virtualisation has been built into the chip. This detects these privileged operations and redirects control back to the hypervisor to do "the right thing".

With Xen these are referred to as HVM domains.

Russ is going to be blogging more about these, so I won't go into too much detail, but if you want to know whether your system is HVM capable, I wrote this simple program to tell you:

dominion# cat hvm-capable.c 
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <string.h>
#include <errno.h>
#include <stdio.h>

static const char devname[] = "/dev/cpu/self/cpuid";

int
main(int argc, char *argv[])
{
	struct {
		uint32_t r_eax, r_ebx, r_ecx, r_edx;
	} _r, *rp = &_r;
	int d;
	char *s;

	if ((d = open(devname, O_RDONLY)) == -1)
		return (1);

	if (pread(d, rp, sizeof (*rp), 0) != sizeof (*rp))
		goto fail;

	/*
	 * The vendor string comes back in EBX, EDX, ECX order, so read
	 * straight through the struct it appears as "Auth" "cAMD" "enti".
	 */
	s = (char *)&rp->r_ebx;
	if (strncmp(s, "Auth" "cAMD" "enti", 12) == 0) {
		if (pread(d, rp, sizeof (*rp), 0x80000001) == sizeof (*rp)) {
			(void) printf("processor is AMD ");
			/*
			 * Read secure virtual machine bit
			 * (bit 2 of ECX feature ID)
			 */
			(void) close(d);
			if ((rp->r_ecx >> 2) & 1) {
				(void) printf("and processor supports SVM\n");
				return (0);
			}
			(void) printf("and does not support SVM\n");
			return (1);
		}
		(void) printf("error reading features register\n");
		goto fail;
	} else if (strncmp(s, "Genu" "ntel" "ineI", 12) == 0) {
		if (pread(d, rp, sizeof (*rp), 0x00000001) == sizeof (*rp)) {
			(void) printf("processor is Intel ");
			/*
			 * Read VMXE feature bit
			 * (bit 5 of ECX feature ID)
			 */
			(void) close(d);
			if ((rp->r_ecx >> 5) & 1) {
				(void) printf("and processor supports VMX\n");
				return (0);
			}
			(void) printf("and does not support VMX\n");
			return (1);
		}
		(void) printf("error reading features register\n");
		goto fail;
	}
fail:
	(void) close(d);
	return (1);
}
SVM is AMD's implementation of HVM while VMX is Intel's.
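Just to make the bit tests concrete: the program above ultimately boils down to checking a single bit of ECX. You can reproduce the arithmetic with plain shell, using made-up register values (real values come from the CPUID instruction):

```shell
# Made-up ECX values, purely to exercise the bit tests used above.
amd_ecx=$((1 << 2))    # pretend the SVM bit (bit 2) is set
intel_ecx=0            # pretend the VMX bit (bit 5) is clear

echo "SVM: $(((amd_ecx >> 2) & 1))"    # prints SVM: 1
echo "VMX: $(((intel_ecx >> 5) & 1))"  # prints VMX: 0
```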

And just a teaser of what you can expect (right click - view image to see it at full size).

Here you see a Solaris paravirtualised VM being installed and a Windows Vista HVM domain. In the top-left corner you can see the Virtual Machine Manager, a new management GUI that will help manage domains.

Sorry, this is going to be pretty hard to see unless you view the image at its original size (1600x1200; yes, virtualisation helps you use up those wasted resources, including screen real estate).
