Thursday Mar 31, 2011


that's what this is.

Tuesday Aug 25, 2009

solaris domU hvm pv disk io support

recently i finished up working on pv disk io support for hvm solaris domains running under the xvm hypervisor. after finishing this work i gave a presentation on it to the solaris sustaining organization. i figured that my slides might be of interest to other folks who want to know how hvm pv disk io works on solaris, so i'm posting my presentation here.

solaris hvm pv disk io support is noticeably different from linux hvm pv disk io support. specifically, when running solaris in an hvm domain with pv disk access enabled, the pv disk paths don't change. i.e., the disks in an hvm domU that is using pv io still look like ide disks, although internally requests to these ide disks get routed to dom0 via paravirtualized disk device drivers. the magic is all done via driver replacement and layering. see my presentation for all the gory details.

Friday Aug 29, 2008

running filemerge on opensolaris

to continue with the theme of my last post, i recently discovered that teamware filemerge doesn't run on opensolaris because of missing X and ToolTalk components. at first you might think this is a non-issue because OS/Net has moved away from teamware to hg. but aside from having to deal with older non-hg workspaces, i've discovered that hgsetup(1) (an onbld utility that configures hg for working with OS/Net sources) configures hg to use teamware's filemerge for resolving merge conflicts. (luckily the hg merge command fails gracefully when filemerge fails to run.) a little digging revealed that the problem can be solved by installing the following packages:
    SUNWdtcor SUNWdtdmr SUNWtltk SUNWxwrtl SUNWmfrun
once again, if you have access to SWAN, you can run the following commands to fix the problem ($dir is the directory holding the nevada packages):
    plat=x; [ `uname -p` = sparc ] && plat=s
    ver=`uname -v | sed 's:.*_::'`
    pkgadd -d $dir SUNWdtcor SUNWdtdmr SUNWtltk SUNWxwrtl SUNWmfrun

Tuesday Aug 26, 2008

building OS/Net on opensolaris

i use my desktop as my primary build machine, so i was a little surprised when, after upgrading my desktop to opensolaris, i could no longer do full ON (aka OS/Net) nightly builds. i discovered that there were two problems. the first is a known bug:

    6733971 Lint targets not POSIX shell (=ksh93) safe in OS/Net
which you can easily work around by replacing /sbin/sh (which in opensolaris is ksh93) with the old sh from nevada. if you have access to SWAN, the following commands, run as root, should fix the problem:
    ver=`uname -v | sed 's:.*_::'`
    plat=`uname -p`
    rm /sbin/sh
    cp -p /net/onnv.eng/export/snapshot/onnv_$ver/proto/root_$plat/sbin/sh /sbin/
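as an aside, the `uname -v | sed` idiom above just strips everything up to the last underscore in the kernel version string, leaving the nevada build number used in the snapshot path. a quick illustration with a hard-coded version string:

```shell
# strip everything through the last "_" from a version string
# like "onnv_91", leaving just the build number
ver=`echo "onnv_91" | sed 's:.*_::'`
echo $ver      # prints 91
```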

the second issue i ran into is a problem caused by missing wbem header files and .jar archives. this issue can be worked around by installing the following nevada packages:

    SUNWadmj SUNWwbapi SUNWwbdev SUNWjsnmp SUNWwbcou
once again, if you have access to SWAN, you can run the following commands to fix the problem ($dir is the directory holding the nevada packages):
    plat=x; [ `uname -p` = sparc ] && plat=s
    ver=`uname -v | sed 's:.*_::'`
    pkgadd -d $dir SUNWadmj SUNWwbapi SUNWwbdev SUNWjsnmp SUNWwbcou

Thursday Jul 24, 2008

moving from nevada and live upgrade to opensolaris

so recently i made the plunge into running opensolaris. previously both my desktop and laptop were running nevada and i was using live upgrade to update the systems every two weeks to the latest build. on my desktop i had four live upgrade boot environments and on my laptop i had three. i also had zpools with lots of data on both systems.

ideally i wanted to be able to install opensolaris into an existing slice on my disks that was currently in use as a live upgrade boot environment. this would allow me to preserve my existing zpools and not have to backup and restore my entire system. it would also mean that i'd have my other nevada BE's left around in case i needed to go back to them for some reason. unfortunately, i discovered that the new opensolaris installer will only install to an entire disk or a partition; installing to an existing slice is not an option.

so on my laptop i underwent the time consuming process of backing up and restoring all my zpool data via zfs send/receive -R to an external usb hard drive, and doing a full fresh install. for my desktop i decided to try a different approach. i decided to install manually into an existing slice while running nevada. the rest of this blog entry will document how i did this.

<disclaimer> i'm documenting this because others have expressed an interest in knowing how to do this. that said, this process should not be attempted by the faint of heart, and it takes you pretty far off the "supported" reservation. (i'm still running into bugs that no one else has seen because no one else installs systems this way.) while this seems to have worked out fine for me, ymmv. </disclaimer>

Step 1 - free up a slice and create a zpool to install to. to do this i deleted an existing live upgrade BE and created a zpool on that slice. i also made sure to manually set all the zfs properties that the opensolaris installer seems to set.

    # free up a slice from an existing BE
    lufslist disk1a_nv_91 | grep ' / '
    ludelete disk1a_nv_91
    # create and configure a zpool on that slice
    zpool create -f rpool c1t1d0s0
    zfs set compression=on rpool
    zfs create -p rpool/ROOT/opensolaris/opt
    zfs set canmount=noauto rpool/ROOT/opensolaris
    zfs set canmount=noauto rpool/ROOT/opensolaris/opt
    zfs set mountpoint=legacy rpool/ROOT/opensolaris
    zfs set mountpoint=legacy rpool/ROOT/opensolaris/opt

Step 2 - install a copy of ips. you need ips to install opensolaris. normally when you boot the opensolaris cd, you have ips, but i needed to install a copy on nevada. the good news is that an ips gate build creates sysv packages that allow ips to be installed on nevada. the bad news is that pre-built packages are not yet available for download, so i just pulled a copy of the latest version of ips from the ips gate and built it. you need a pretty recent version of nevada to do this (i was running snv_91).

    # setup misc environment variables ($WS is your workspace directory)
    PATH=$WS/gate/proto/root_`uname -p`/usr/bin:$PATH
    PYTHONPATH=$WS/gate/proto/root_`uname -p`/usr/lib/python2.4/vendor-packages
    # download and build the ips source in /tmp/pkg
    mkdir $WS
    cd $WS
    hg clone ssh:// e
    cd gate/src
    make install

Step 3 - install an opensolaris image into our zpool. i figured out how to do this by talking to folks on the ips team and also by looking at the ipkg brand zone install script (which is at /usr/lib/brand/ipkg/pkgcreatezone). some of the things that i needed to do are pretty hacky and non-obvious, and may represent bugs in the existing system that should be fixed.

    # we're going to do our install in /a
    PKG_IMAGE=/a; export PKG_IMAGE
    # mount our zpool on /a
    mkdir -p $PKG_IMAGE
    mount -F zfs rpool/ROOT/opensolaris $PKG_IMAGE
    mkdir -p $PKG_IMAGE/opt
    mount -F zfs rpool/ROOT/opensolaris/opt $PKG_IMAGE/opt
    # create the basic opensolaris install image.
    pkg image-create -F -a $PKG_IMAGE
    pkg refresh
    pkg install entire@0.5.11-0.93
    pkg install SUNWcsd SUNWcs
    pkg install redistributable
    # seed the initial smf repository
    cp $PKG_IMAGE/lib/svc/seed/global.db $PKG_IMAGE/etc/svc/repository.db
    chmod 0600 $PKG_IMAGE/etc/svc/repository.db
    chown root:sys $PKG_IMAGE/etc/svc/repository.db
    # setup smf profiles
    ln -s ns_files.xml $PKG_IMAGE/var/svc/profile/name_service.xml
    ln -s generic_limited_net.xml $PKG_IMAGE/var/svc/profile/generic.xml
    ln -s inetd_generic.xml $PKG_IMAGE/var/svc/profile/inetd_services.xml
    ln -s platform_none.xml $PKG_IMAGE/var/svc/profile/platform.xml
    # mark the new system image as uninstalled
    sysidconfig -b $PKG_IMAGE -a /lib/svc/method/sshd
    # configure our new /etc/vfstab
    printf "rpool/ROOT/opensolaris -\t/\t\tzfs\t-\tno\t-\n" >> $PKG_IMAGE/etc/vfstab
    chmod a+r $PKG_IMAGE/etc/vfstab
    # carry over our current swap settings to the new image
    grep swap /etc/vfstab | grep -v '^swap' >> $PKG_IMAGE/etc/vfstab
    # turn off root as a role
    print "/^root::::type=role;\ns/^root::::type=role;/root::::/\nw" | \
        ed -s $PKG_IMAGE/etc/user_attr
    # delete the "jack" user
    print "/^jack:/d\nw" | ed -s $PKG_IMAGE/etc/passwd
    chmod u+w $PKG_IMAGE/etc/shadow
    print "/^jack:/d\nw" | ed -s $PKG_IMAGE/etc/shadow
    chmod u-w $PKG_IMAGE/etc/shadow
    # copy over my existing host sshd keys
    cp -p /etc/ssh/*key* $PKG_IMAGE/etc/ssh
    # configure /dev in the new image
    devfsadm -R $PKG_IMAGE
    ln -s ../devices/pseudo/sysmsg@0:msglog $PKG_IMAGE/dev/msglog
    # update the boot archive in the new image
    bootadm update-archive -R $PKG_IMAGE
    # workaround 6717352
    chmod a-x $PKG_IMAGE/usr/lib/gvfsd-trash
    # workaround 6728023
    rm -f $PKG_IMAGE/etc/gtk-2.0/gdk-pixbuf.loaders
    rm -f $PKG_IMAGE/etc/64/gtk-2.0/gdk-pixbuf.loaders
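a note on the `ed` one-liners above: each one feeds ed a tiny script on stdin — an address like `/^jack:/` to find the matching line, a command (`d` to delete, `s` to substitute), and `w` to write the file back in place. the effect of the jack-removal step can be seen with a scratch file (the passwd contents here are invented for illustration, and grep -v is used just to show the resulting file without editing it in place):

```shell
# scratch passwd-style file with a "jack" entry (contents invented
# for illustration)
tmp=`mktemp`
printf 'root:x:0:0:Super-User:/root:/sbin/sh\n' > $tmp
printf 'jack:x:100:10:Default User:/home/jack:/bin/bash\n' >> $tmp

# the ed script "/^jack:/d\nw" deletes the line matching ^jack: and
# writes the file back in place; grep -v shows the same filtering
# effect on stdout
grep -v '^jack:' $tmp     # prints only the root line

rm -f $tmp
```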

Step 4 - modify grub so we can boot the new image. at this point the new image is pretty much ready to boot. all we need to do is configure grub so that it can boot the new image. unfortunately, this is probably the trickiest step in the entire process: if you screw it up you could lose your grub menu, which makes it difficult to boot any boot environment. so here's what i did:

    # update to the latest version of grub (this command generated
    # some errors which i ignored).
    $PKG_IMAGE/boot/solaris/bin/update_grub -R $PKG_IMAGE
    # create an informational grub menu in the install image that
    # points us to the real grub menu.
    cat <<-EOF > $PKG_IMAGE/boot/grub/menu.lst
    #                                                                       #
    # For zfs root, menu.lst has moved to /rpool/boot/grub/menu.lst.        #
    #                                                                       #
    EOF
    # create the new real grub menu
    cat <<-EOF > /rpool/boot/grub/menu.lst
    default 0
    timeout 10
    splashimage /boot/grub/splash.xpm.gz
    title Solaris 2008.11 snv_93 X86
    findroot (pool_rpool,0,a)
    bootfs rpool/ROOT/opensolaris
    kernel\$ /platform/i86pc/kernel/\$ISADIR/unix -k -B \$ZFS-BOOTFS,console=ttya
    module\$ /platform/i86pc/\$ISADIR/boot_archive
    title Solaris xVM
    findroot (pool_rpool,0,a)
    bootfs rpool/ROOT/opensolaris
    kernel\$ /boot/\$ISADIR/xen.gz com1=9600,8n1 console=com1
    module\$ /platform/i86xpv/kernel/\$ISADIR/unix /platform/i86xpv/kernel/\$ISADIR/unix -k -B \$ZFS-BOOTFS
    module\$ /platform/i86pc/\$ISADIR/boot_archive
    EOF
    # make the grub menu files readable by everyone.
    chmod a+r $PKG_IMAGE/boot/grub/menu.lst
    chmod a+r /rpool/boot/grub/menu.lst
    # setup /etc/bootsign so that grub can find this zpool
    mkdir -p /rpool/etc
    echo pool_rpool > /rpool/etc/bootsign

Step 5 - cross your fingers and reboot. before doing this, you might want to copy over your existing grub menu entries to the new grub menu file at /rpool/boot/grub/menu.lst. once you're ready to reboot into opensolaris, you can do the following:

    umount $PKG_IMAGE/opt
    umount $PKG_IMAGE

Wednesday Mar 14, 2007

Tech Days in Kuala Lumpur

Last week I presented a talk at the Sun Tech Days Conference in Kuala Lumpur, Malaysia about most of the virtualization work going on in OpenSolaris. I covered a lot of information about technologies like Zones, BrandZ, Xen, and Crossbow. The attendance was great (over a hundred people) and after a full day of OpenSolaris presentations we also had a BoF where we sat around for almost 2 hours discussing different aspects of OpenSolaris. Since then I've gotten a couple requests for the slides from my presentation, so now you can get them here.

Wednesday Dec 14, 2005

Using branded zones on a laptop

I'm figuring that now that we've released BrandZ there are going to be people out there who want to install linux branded zones and run applications that might not be available for x86 solaris (say acroread). If you have a machine with a static network configuration then this will be pretty easy. (Create a linux zone with a static ip, log into it, and run your application.) But if you're like me and want to be able to do this on your laptop, where the network environment may be changing, it takes a bit more work. So now I'll document my current laptop configuration, which I've set up to allow me to easily run applications in multiple branded and non-branded zones in a changing network environment. To support running multiple zones I had to create a local subnet on my laptop. (I randomly chose a subnet; you could choose a different network. Also, I used the iprb network interface on my laptop; if you have a different network interface then substitute its name in place of iprb in the commands below.) Here's what I did in the global zone to set this up:

- added entries to /etc/netmasks:

- added entries to /etc/hosts:
> lnetwork
> lrouter
> lhost
> lzone1
> lzone2
> lzone3
> lzone4
> lbroadcast

- created /etc/hostname.iprb0 with the following content:
> addif lhost

- reboot [1]
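For reference, an /etc/hostname.iprb0 addif line normally carries the address prefix and interface state as well as the hostname. A hypothetical complete version (the /29 prefix is an assumption, matching the prefix used for the zones in this post) might look like:

```shell
# /etc/hostname.iprb0 -- plumbs iprb0:1 on the private subnet at
# boot; "lhost" resolves via /etc/hosts, and the /29 prefix here
# is an assumption
addif lhost/29 up
```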

Now whenever my system boots up I have a virtual interface (iprb0:1) plumbed up on a local subnet. iprb0 is still free, so all my scripts that set up dhcp on that interface will continue to work. Next I created a branded centos linux zone with a network interface on this new local network.

# cat > /tmp/zonecfg.txt <<-EOF
create -B lx
set autoboot=true
set zonepath=/export/zones/lzone1
add net
set physical=iprb0
set address=lzone1/29
end
commit
EOF
# zonecfg -z lzone1 -f /tmp/zonecfg.txt
# zoneadm -z lzone1 install -d <path to install archives>
# zoneadm -z lzone1 boot

After booting the zone, I can see that I have another virtual interface plumbed on my machine (iprb0:2) that is allocated to the new zone. At this point the only way to log into the zone is via zlogin. The reason for this is that zonecfg simply allocated networking resources to the zone; the linux processes in the zone are not yet aware of (or configured to use) those resources. So now we'll log into the linux zone and configure the network:

# zlogin lzone1
[Connected to zone 'lzone1' pts/4]
-bash-2.05b# cat > /etc/sysconfig/network <<-EOF
-bash-2.05b# exit
# zoneadm -z lzone1 reboot

Once the zone finishes rebooting, networking should be enabled within the zone and I should be able to log into it via ssh:

edp@squee% ssh root@lzone1
The authenticity of host 'lzone1 (' can't be established.
RSA key fingerprint is be:49:a8:09:8c:19:18:cc:f2:1c:e3:84:c7:76:d7:5d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'lzone1,' (RSA) to the list of known hosts.
root@lzone1's password: <the default password is "root">
Last login: Tue Nov 29 16:52:22 2005 from
Welcome to your shiny new Linux zone.

Sweet! Now since ssh supports X forwarding I can simply run X applications like xterm, acroread, real player, or glquake without having to do any xauth/xhost/DISPLAY magic.

Of course, after doing all this you might want to create yourself a local user account in the linux zone and loopback mount your home directory from the global zone so you can easily read all those .pdf documents.
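The loopback mount can be set up with a lofs filesystem resource in zonecfg; here's a sketch (run in the global zone; the paths are hypothetical):

```shell
# add a lofs (loop-back) mount of a global zone directory into the
# zone; "dir" is the mount point inside the zone and "special" is
# the global zone path (both paths here are examples)
zonecfg -z lzone1 <<EOF
add fs
set dir=/export/home/edp
set special=/export/home/edp
set type=lofs
end
commit
EOF
```

Then reboot the zone (or mount it by hand) so the new filesystem shows up.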

Well, this is all fine and whizzy (and it lets me conveniently run linux apps and do brandz development on my laptop), but then what happens when you hook your laptop up to a real network and discover that you want to be able to access it from within the linux zone? Well, since the private little network we created isn't routed, you can't do this. But hey, solaris has ipfilter and ipnat, so with a little help from an old blog entry by mike ditto we can get this working. Basically, we'll set up ipnat to do forwarding for the new local subnet we created. Here's what I did in the global zone:

- uncomment or create the following entry in /etc/ipf/pfil.ap
> iprb  -1      0       pfil

- add the following entry to /etc/ipf/ipnat.conf
> map iprb0 -> 0

- enable ipfilter by running the following command:
> svcadm enable ipfilter

- reboot [1]

Then whenever I connect my laptop to a network I run the following additional commands in a shell script:


# get the ip address of our fake private subnet router from /etc/hosts
lrouter=`getent hosts lrouter | nawk '{print $1}'`

# get the ip address of the real network router
router=`netstat -rn | grep default | grep -v " $lrouter " | nawk '{print $2}'`

# send some data to the real network router so we can look up its arp address
ping -sn $router 1 1 >/dev/null

# record the arp address of the real router
router_arp=`arp $router | nawk '{print $4}'`

# delete any existing arp address entry for our fake private subnet router
arp -d $lrouter >/dev/null

# assign the real router's arp address to our fake private subnet router
arp -s $lrouter $router_arp

# route our private subnet through our fake private subnet router
route add default lrouter

Now all the local zones on my laptop can access whatever network I'm connected to via my iprb interface.



1 - It is possible to enable all the configuration listed above without rebooting the system. It involves re-arranging the configuration steps above and adding a few more. I included the reboots since they simplify the documentation of the configuration process.



