Saturday Nov 14, 2009

VirtualBox 3.1 Beta 1 now available for download

The first beta release of VirtualBox 3.1 is now available. You can download the binaries at http://download.virtualbox.org/virtualbox/3.1.0_BETA1/

Version 3.1 will be a major update. The following major new features have been added:
  • Migration of a VM session from one system to another (Teleportation); see the command sketch after this list
  • VM states can be restored from an arbitrary snapshot (Branched snapshots)
  • New snapshots can be taken from other snapshots (allows for forking VMs)
  • 2D video acceleration for Windows guests using the host video hardware for overlay stretching and colour conversion
  • The network attachment type can be changed while a VM is running
  • Experimental support for new USB features in OpenSolaris hosts (nv124 and later)
  • Significant performance improvements for PAE and AMD64 guests (VT-x and AMD-V only; normal (non-nested) paging)
  • Experimental support for EFI (Extensible Firmware Interface)
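For the curious, here is a minimal sketch of how teleportation is driven from the command line, based on my first pass through the 3.1 beta documentation. The VM name, target host name, and port below are placeholders; check the beta manual for the authoritative syntax.

On the target host, create a VM with matching hardware settings, flag it as a teleportation target, and start it (it will wait for an incoming session):

$ VBoxManage modifyvm "Chapterhouse" --teleporter on --teleporterport 6000
$ VBoxManage startvm "Chapterhouse"

Then, on the source host, push the running session over:

$ VBoxManage controlvm "Chapterhouse" teleport --host targethost --port 6000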

A particularly nice new feature in the user interface is a red/green visual indicator on critical resource allocation controls. These indicators are relative to the capacity of the system, not to the resources currently free, but they should prevent accidental overallocation of resources like CPU or memory.

Important: You should reinstall the guest additions after upgrading to the beta release.

If you experience problems after upgrading to the beta release (such as SMP Solaris getting upset with heavy network load) you can reinstall an older version of VirtualBox. Just remember to reinstall the guest additions after the downgrade.

If your VMs are in an Inaccessible state, check the XML configuration files. In the details panel you should see something like

Could not load the settings file 'D:\blah\.VirtualBox\Machines\Chapterhouse (Solaris 10)\Chapterhouse (Solaris 10).xml'.

Element '{http://innotek.de/VirtualBox-settings}CpuIdTree': This element is not expected.
Result Code:          VBOX_E_XML_ERROR (0x80BB000A)
Component:            VirtualBox
Interface:            IVirtualBox {3fab53a-199b-4526-a91a-93ff62e456b8}

Using the editor of your choice, open the XML file named in the error message and remove the <CpuIdTree/> element. Save the file, hit the refresh button in the VirtualBox window, and you should be able to restart your VM.
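If you have more than one VM to repair, something like the following can automate the edit on a Unix host. This is only a sketch: it assumes the CpuIdTree element sits on a single line, and the file names are hypothetical. Keep the backup copy until you have verified the VM starts.

# cp "Chapterhouse (Solaris 10).xml" chapterhouse.xml.orig
# sed '/CpuIdTree/d' chapterhouse.xml.orig > "Chapterhouse (Solaris 10).xml"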

Please refer to the VirtualBox 3.1 Beta 1 Changelog for a complete list of changes and enhancements. Please do not use this VirtualBox Beta release on production machines. A VirtualBox Beta release should be considered a bleeding-edge release meant for early evaluation and testing purposes. Please read the known issues before installing this beta release.

Please use our 'VirtualBox Beta Feedback' forum at http://forums.virtualbox.org/viewforum.php?f=15 to report any problems with the Beta.


Friday Oct 30, 2009

VirtualBox 3.0.10 is now available

VirtualBox 3.0.10 has been released for all platforms. 3.0.10 is a maintenance release and contains quite a few performance and stability improvements. Chief among them for me are the guest SMP stability fixes; here is psrinfo output from my 4 virtual CPU Solaris guest running under 3.0.10:

# psrinfo -v
Status of virtual processor 0 as of: 10/30/2009 14:33:18
  on-line since 10/30/2009 08:18:50.
  The i386 processor operates at 2613 MHz,
	and has an i387 compatible floating point processor.
Status of virtual processor 1 as of: 10/30/2009 14:33:18
  on-line since 10/30/2009 08:18:52.
  The i386 processor operates at 2613 MHz,
	and has an i387 compatible floating point processor.
Status of virtual processor 2 as of: 10/30/2009 14:33:18
  on-line since 10/30/2009 08:18:52.
  The i386 processor operates at 2613 MHz,
	and has an i387 compatible floating point processor.
Status of virtual processor 3 as of: 10/30/2009 14:33:18
  on-line since 10/30/2009 08:18:52.
  The i386 processor operates at 2613 MHz,
	and has an i387 compatible floating point processor.


As you can see, my 4 CPU Solaris guest is running just fine. I was encouraged after 3.0.8, but under heavy network loads the SMP guest would lock up. I've been playing YouTube videos while doing live upgrades for the last couple of hours, and everything seems to be running as expected.

Changes in VirtualBox 3.0.10 include
  • VMM: guest SMP stability fixes
  • VMM: fixed guru meditation with nested paging and SMP guests
  • VMM: changed VT-x/AMD-V usage to detect other active hypervisors; necessary for e.g. Windows 7 XP compatibility mode (Windows & Mac OS X hosts only)
  • VMM: guru meditation during SCO OpenServer installation and reboot (VT-x only)
  • VMM: fixed accessed bit handling in certain cases
  • VMM: fixed VPID flushing (VT-x only)
  • VMM: fixed broken nested paging for 64 bits guests on 32 bits hosts (AMD-V only)
  • VMM: fixed loading of old saved states/snapshots
  • Mac OS X hosts: fixed memory leaks
  • Mac OS X hosts (Snow Leopard): fixed redraw problem in a dual screen setup
  • Windows hosts: installer updates for Windows 7
  • Solaris hosts: out of memory handled incorrectly
  • Solaris hosts: the previous fix for #5077 broke the DVD host support on Solaris 10 (VBox 3.0.8 regression)
  • Linux hosts: fixed module compilation against Linux 2.6.32rc4 and later
  • Guest Additions: fixed possible guest OS kernel memory exhaustion
  • Guest Additions: fixed stability issues with SMP guests
  • Windows Additions: fixed color depth issue with low resolution hosts, netbooks, etc.
  • Windows Additions: fixed NO_MORE_FILES error when saving to shared folders
  • Windows Additions: fixed subdirectory creation on shared folders
  • Linux Additions: sendfile() returned -EOVERFLOW when executed on a shared folder
  • Linux Additions: fixed incorrect disk usage value (non-Windows hosts only)
  • Linux installer: register the module sources at DKMS even if the package provides proper modules for the current running kernel
  • 3D support: removed invalid OpenGL assertion
  • Network: fixed the Am79C973 PCNet emulation for QNX (and probably other) guests
  • VMDK: fix handling of split image variants
  • VHD: do not delay updating the footer when expanding the image to prevent image inconsistency
  • USB: stability fix for some USB 2.0 devices
  • GUI: added a search index to the .chm help file
  • GUI/Windows hosts: fixed CapsLock handling on French keyboards
  • Shared clipboard/X11 hosts: fixed a crash when clipboard initialisation failed
If you are running on a Solaris or OpenSolaris host you should reboot your systems after reloading the kernel driver as soon as it is convenient. This is a temporary situation that is expected to be resolved soon.
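If you want to confirm which VirtualBox kernel modules are loaded while you wait for that convenient reboot window, a quick look at the module list works; this is just a sketch, and the exact module names may differ on your system:

# modinfo | grep -i vbox

The descriptions in the output should identify the VirtualBox host drivers.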

For a complete list of the changes in this release, please take a look at the VirtualBox Changelog.


Thursday Oct 15, 2009

Pun for the day: Hypervisor

    hypervisor (n) - the type of boss that wants regular status reports. Hourly.

Wednesday Oct 14, 2009

Pun for the day: Paravirtualization

    paravirtualization (n) - what happens when you run both VMware and VirtualBox in your environment.

Tuesday Oct 06, 2009

VirtualBox 3.0.8 is now available

VirtualBox 3.0.8 has been released for all platforms. 3.0.8 is a maintenance release and contains quite a few performance and stability improvements. For a list of the changes in this release, please take a look at the VirtualBox Changelog.

It's early yet but my testing of multi-cpu Solaris guests is quite encouraging. I've been stressing several of them quite hard and they are working as expected. I am still using IDE for my virtual platform disk controllers. I'll fire up a few tomorrow and see how SATA does. My Solaris desktop is now smokin' fast.


Wednesday Sep 09, 2009

VirtualBox 3.0.6 Beta Release 1 is now available for testing

The first beta release of VirtualBox 3.0.6 is now available. You can download the binaries at http://download.virtualbox.org/virtualbox/3.0.6_BETA1/

Version 3.0.6 is a maintenance update and contains the following fixes
  • VMM: fixed IO-APIC overhead for 32 bits Windows NT, 2000, XP and 2003 guests (AMD-V only; bug #4392)
  • VMM: fixed a Guru meditation under certain circumstances when enabling a disabled device (bug #4510)
  • VMM: fixed a Guru meditation when booting certain Arch Linux guests (software virtualization only; bug #2149)
  • VMM: fixed hangs with 64 bits Solaris & OpenSolaris guests (bug #2258)
  • VMM: fixed decreasing rdtsc values (AMD-V & VT-x only; bug #2869)
  • VMM: small Solaris/OpenSolaris performance improvements (VT-x only)
  • VMM: cpuid change to correct reported virtual CPU id in Linux
  • VMM: NetBSD 5.0.1 CD hangs during boot (VT-x only; bug #3947)
  • Solaris hosts: fixed a potential host system deadlock when CPUs were onlined or offlined
  • Python WS: fixed issue with certain enumeration constants having wrong values in Python webservices bindings
  • Python API: several threading and platform issues fixed
  • Python shell: added exportVM command
  • Python shell: improvements and bugfixes
  • Python shell: corrected detection of home directory in remote case
  • OVF: fixed XML comment handling that could lead to parser errors
  • Main: fixed a rare parsing problem with port numbers of USB device filters in machine settings XML
  • Main: restrict guest RAM size to 1.5 GB (32 bits Windows hosts only)
  • GUI: fixed rare crash when removing the last disk from the media manager (bug #4795)
  • Linux hosts: don't crash on Linux PAE kernel < 2.6.11 (in particular RHEL/CentOS 4); disable VT-x on Linux kernels < 2.6.13 (bug #1842)
  • Linux/Solaris hosts: correctly detect keyboards with less keys than usual (bug #4799)
  • Serial: fixed host mode (Solaris, Linux and Mac OS X hosts; bug #4672)
  • VRDP: Remote USB Protocol version 3
  • SATA: fixed hangs and BSODs introduced with 3.0.4 (#4695, #4739, #4710)
  • SATA: fixed a bug which prevented Windows 7 from detecting more than one hard disk
  • iSCSI: fix logging out when the target has dropped the connection, fix negotiation of parameters, fix command resend when the connection was dropped, fix processing SCSI status for targets which do not use phase collapse
  • BIOS: fixed a bug preventing to start the OS/2 boot manager (2.1.0 regression, bug #3911)
  • PulseAudio: don't hang during VM termination if the connection to the server was unexpectedly terminated (bug #3100)
  • Mouse: fixed weird mouse behaviour with SMP (Solaris) guests
  • HostOnly Network: fixed failure in CreateHostOnlyNetworkInterface() on Linux (no GUID)
  • HostOnly Network: fixed wrong DHCP server startup while hostonly interface bringup on Linux
  • HostOnly Network: fixed incorrect factory and default MAC address on Solaris
  • DHCP: fixed a bug in the DHCP server where it allocated one IP address less than the configured range
  • E1000: fixed receiving of multicast packets
  • E1000: fixed up/down link notification after resuming a VM
  • NAT: fixed ethernet address corruptions (bug #4839)
  • NAT: fixed hangs, dropped packets and retransmission problems (bug #4343)
  • Bridged Network: fixed packet queue issue which might cause DRIVER_POWER_STATE_FAILURE BSOD for windows hosts (bug #4821)
  • Windows Additions: fixed a bug in VBoxGINA which prevented selecting the right domain when logging in the first time
  • Windows host installer: should now also work on unicode systems (like Korean, bug #3707)
  • Shared clipboard: do not send zero-terminated text to X11 guests and hosts (bug #4712)
  • Shared clipboard: use a less CPU intensive way of checking for new data on X11 guests and hosts (bug #4092)
  • Mac OS X hosts: prevent password dialogs in 32-bit Snow Leopard
  • Solaris hosts: worked around an issue that caused the host to hang (bug #4486)
  • Guest Additions: do not hide the host mouse cursor when restoring a saved state (bug #4700)
  • Windows guests: fixed issues with the display of the mouse cursor image (bugs #2603, #2660 and #4817)
  • SUSE 11 guests: fixed Guest Additions installation (bug #4506)
  • Guest Additions: support Fedora 12 Alpha guests (bugs #4731, #4733 and #4734)

Please do not use this VirtualBox Beta release on production machines. A VirtualBox Beta release should be considered a bleeding-edge release meant for early evaluation and testing purposes.

Please use our 'VirtualBox Beta Feedback' forum at http://forums.virtualbox.org/viewforum.php?f=15 to report any problems with the Beta.


Tuesday Jun 30, 2009

VirtualBox 3.0 is now available

VirtualBox 3.0 has been released and is now available for download. You can get binaries for Windows, OS X (Intel Mac), Linux and Solaris hosts at http://www.virtualbox.org/wiki/Downloads

Version 3.0 is a major update and contains the following new features
  • SMP guest machines, up to 32 virtual CPUs: requires Intel VT-x or AMD-V
  • Experimental support for Direct3D 8/9 in Windows guests: great for games and multimedia applications
  • Support for OpenGL 2.0 for Windows, Linux and Solaris guests

There is also a long list of bugs fixed in the new release. Please see the Changelog for a complete list.


In my early testing of VirtualBox 3.0 I have been most impressed by the SMP performance as well as the Direct3D passthrough from the guest. There are now some games that I can play in a guest virtual machine that I could not previously. Thanks to the VirtualBox development team for another great release.


Wednesday Jun 17, 2009

VirtualBox 3.0 Beta Release 1 is now available for testing

The first beta release of VirtualBox 3.0 is now available. You can download the binaries at http://download.virtualbox.org/virtualbox/3.0.0_BETA1/

Version 3.0 will be a major update. The following major new features were added:
  • Guest SMP with up to 32 virtual CPUs (VT-x and AMD-V only)
  • Windows guests: ability to use Direct3D 8/9 applications / games (experimental)
  • Support for OpenGL 2.0 for Windows, Linux and Solaris guests

In addition, the following items were fixed and/or added:
  • Virtual mouse device: eliminated micro-movements of the virtual mouse which were confusing some applications (bug #3782)
  • Solaris hosts: allow suspend/resume on the host when a VM is running (bug #3826)
  • Solaris hosts: tighten the restriction for contiguous physical memory under certain conditions
  • VMM: fixed occasional guru meditation when loading a saved state (VT-x only)
  • VMM: eliminated IO-APIC overhead with 32 bits guests (VT-x only, some Intel CPUs don’t support this feature (most do); bug #638)
  • VMM: fixed 64 bits CentOS guest hangs during early boot (AMD-V only; bug #3927)
  • VMM: performance improvements for certain PAE guests (e.g. Linux 2.6.29+ kernels)
  • GUI: added mini toolbar for fullscreen and seamless mode (Thanks to Huihong Luo)
  • GUI: redesigned settings dialogs
  • GUI: allow creating/removing more than one host-only network adapter
  • GUI: display estimated time for long running operations (e.g. OVF import/export)
  • GUI: fixed rare hangs when opening the OVF import/export wizards (bug #4157)
  • VRDP: support Windows 7 RDP client
  • Networking: fixed another problem with TX checksum offloading with Linux kernels up to version 2.6.18
  • VHD: properly write empty sectors when cloning of VHD images (bug #4080)
  • VHD: fixed crash when discarding snapshots of a VHD image
  • VBoxManage: fixed incorrect partition table processing when creating VMDK files giving raw partition access (bug #3510)
  • OVF: several OVF 1.0 compatibility fixes
  • Shared Folders: sometimes a file was created using the wrong permissions (2.2.0 regression; bug #3785)
  • Shared Folders: allow to change file attributes from Linux guests and use the correct file mode when creating files
  • Shared Folders: fixed incorrect file timestamps, when using Windows guest on a Linux host (bug #3404)
  • Linux guests: new daemon vboxadd-service to handle time synchronization and guest property lookup
  • Linux guests: implemented guest properties (OS info, logged in users, basic network information)
  • Windows host installer: VirtualBox Python API can now be installed automatically (requires Python and Win32 Extensions installed)
  • USB: Support for high-speed isochronous endpoints has been added. In addition, read-ahead buffering is performed for input endpoints (currently Linux hosts only). This should allow additional devices to work, notably webcams
  • NAT: allow to configure socket and internal parameters
  • Registration dialog uses Sun Online accounts now

Please do not use this VirtualBox Beta release on production machines. A VirtualBox Beta release should be considered a bleeding-edge release meant for early evaluation and testing purposes.

Please use our 'VirtualBox Beta Feedback' forum at http://forums.virtualbox.org/viewforum.php?f=15 to report any problems with the Beta.


Wednesday Apr 08, 2009

VirtualBox 2.2 has been released

VirtualBox 2.2 is now available. This is a major upgrade and includes the following new features

  • OVF (Open Virtualization Format) appliance import and export (see chapter 3.8, Importing and exporting virtual machines, User Manual page 55, and the command sketch after this list)
  • Host-only networking mode (see chapter 6.7, Host-only networking, User Manual page 88)
  • Hypervisor optimizations with significant performance gains for high context switching rates
  • Raised the memory limit for VMs on 64-bit hosts to 16GB
  • VT-x/AMD-V are enabled by default for newly created virtual machines
  • USB (OHCI & EHCI) is enabled by default for newly created virtual machines (Qt GUI only)
  • Experimental USB support for OpenSolaris hosts
  • Shared folders for Solaris and OpenSolaris guests
  • OpenGL 3D acceleration for Linux and Solaris guests (see chapter 4.8, Hardware 3D acceleration (OpenGL), User Manual page 70)
  • Added C API in addition to C++, Java, Python and Web Services
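As a quick taste of the OVF support, import and export can be driven from VBoxManage as well as the GUI wizard. A sketch with hypothetical appliance and VM names; see chapter 3.8 of the User Manual for the full option list:

$ VBoxManage import MyAppliance.ovf
$ VBoxManage export "Chapterhouse" -o /export/appliances/chapterhouse.ovf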

    For a complete list of new features and bug fixes, see the Changelog at http://www.virtualbox.org/wiki/Changelog.

    VirtualBox 2.2 can be downloaded from http://www.virtualbox.org/wiki/Downloads

    Important: Remember to reinstall the guest additions for all of your existing VMs.
    Friday Mar 27, 2009

    VirtualBox 2.2 Beta 2 now available for testing

    VirtualBox 2.2 Beta 2 is now available for testing. In addition to the feature list from the Beta 1 announcement, Beta 2 contains the following fixes.

  • Raised the memory limit for VMs on 64-bit hosts to 16GB
  • Many fixes for OVF import/export
  • VMM: properly emulate RDMSR from the TSC MSR, should fix some NetBSD guests
  • IDE: fixed hard disk upgrade from XML-1.2 settings (bug #1518)
  • Hard disks: refuse to start the VM if a disk image is not writable
  • USB: Fixed BSOD on the host with certain USB devices (Windows hosts only; bug #1654)
  • E1000: properly handle cable disconnects (bug #3421)
  • X11 guests: prevented setting the locale in vboxmouse, as this caused problems with Turkish locales (bug #3563)
  • Linux additions: fixed typo when detecting Xorg 1.6 (bug #3555)
  • Windows guests: bind the VBoxMouse.sys filter driver to the correct guest pointing device (bug #1324)
  • Windows hosts: fixed BSOD when starting a VM with enabled host interface (bug #3414)
  • Linux hosts: do not leave zombies of VBoxSysInfo.sh (bug #3586)
  • VBoxManage: controlvm dvdattach did not work if the image was attached before
  • VBoxManage showvminfo: don't spam the release log if the additions don't support statistics information (bug #3457)
  • GUI: fail with an appropriate error message when trying to boot a read-only disk image (bug #1745)
  • LsiLogic: fixed problems with Solaris guests

    See the announcement in the VirtualBox Forum for more information. The beta can be downloaded from http://download.virtualbox.org/virtualbox/2.2.0_BETA2.

    As with Beta 1, this release is for testing and evaluation purposes, so please do not run this on a production system. I really don't need to remind you of this - you will get a nice graphical reminder every time you start the control program.

    Point your browser at the VirtualBox Beta Feedback Forum for more information about the beta program.

    Monday Mar 23, 2009

    VirtualBox 2.2 Beta 1 now available for testing

    VirtualBox 2.2 Beta 1 is now available for testing. It is a major release and contains many new features, including

  • OVF (Open Virtualization Format) appliance import and export
  • Host-only networking mode
  • Hypervisor optimizations with significant performance gains for high context switching rates
  • VT-x/AMD-V are enabled by default for newly created virtual machines
  • USB (OHCI & EHCI) is enabled by default for newly created virtual machines (Qt GUI only)
  • Experimental USB support for OpenSolaris hosts
  • Shared folders for Solaris and OpenSolaris guests
  • OpenGL 3D acceleration for Linux guests
  • Experimental support for OS X 10.6 (Snow Leopard) hosts running both the 64-bit and the 32-bit kernel

    as well as numerous bug fixes. See the announcement in the VirtualBox Forum for more information. The beta can be downloaded from http://download.virtualbox.org/virtualbox/2.2.0_BETA1.

    This release is for testing and evaluation purposes, so please do not run this on a production system. I really don't need to remind you of this - you will get a nice graphical reminder every time you start the control program.

    Since there are some configuration file changes in the new release, the beta program will also upgrade your configurations. Please back up your XML files before running it for the first time (or if you are like me, a zfs snapshot -r to your xvm datasets will do the trick).
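    A minimal sketch of that snapshot safety net, assuming your VirtualBox configuration lives under a dataset named pandora/xvm (substitute your own dataset name):

    # zfs snapshot -r pandora/xvm@pre-2.2beta1
    # zfs list -t snapshot | grep pre-2.2beta1

    If the beta makes a mess of things, a zfs rollback to that snapshot puts the configuration back.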

    Point your browser at the VirtualBox Beta Feedback Forum for more information about the beta program.

    Tuesday Nov 04, 2008

    Solaris and OpenSolaris coexistence in the same root zpool

    Some time ago, my buddy Jeff Victor gave us FrankenZone. An idea that is disturbingly brilliant. It has taken me a while, but I offer for your consideration VirtualBox as a V2P platform for OpenSolaris. Nowhere near as brilliant, but at least as unusual. And you know that you have to try this out at home.

    Note: This is totally a science experiment. I fully expect to see the two guys from Myth Busters showing up at any moment. It also requires at least build 100 of OpenSolaris on both the host and guest operating system to work around the hostid difficulties.

    With the caveats out of the way, let me set the back story to explain how I got here.

    Until virtualization technologies become ubiquitous and nothing more than BIOS extensions, multi-boot configurations will continue to be an important capability. And for those working with [Open]Solaris there are several limitations that complicate this unnecessarily. Rather than lamenting these, the possibility of leveraging ZFS root pools, now in Solaris 10 10/08, should offer up some interesting solutions.

    What I want to do is simple - have a single Solaris fdisk partition that can have multiple versions of Solaris all bootable with access to all of my data. This doesn't seem like much of a request, but as of yet this has been nearly impossible to accomplish in anything close to a supportable configuration. As it turns out the essential limitation is in the installer - all other issues can be handled if we can figure out how to install OpenSolaris into an existing pool.

    What we will do is use our friend VirtualBox to work around the installer issues. After installing OpenSolaris in a virtual machine we take a ZFS snapshot, send it to the bare metal Solaris host and restore it in the root pool. Finally we fix up a few configuration files to make everything work and we will be left with a single root pool that can boot Solaris 10, Solaris Express Community Edition (nevada), and OpenSolaris.

    How cool is that :-) Yeah, it is that cool. Let's proceed.

    Prepare the host system

    The host system is running a fresh install of Solaris 10 10/08 with a single large root zpool. In this example the root zpool is named panroot. There is also a separate zpool that contains data that needs to be preserved in case a re-installation of Solaris is required. That zpool is named pandora, but it doesn't matter - it will be automatically imported in our new OpenSolaris installation if all goes well.
    # lustatus 
    Boot Environment           Is       Active Active    Can    Copy      
    Name                       Complete Now    On Reboot Delete Status    
    -------------------------- -------- ------ --------- ------ ----------
    s10u6_baseline             yes      no     no        yes    -         
    s10u6                      yes      no     no        yes    -         
    nv95                       yes      yes    yes       no     -         
    nv101a                     yes      no     no        yes    -    
    
         
    # zpool list
    NAME      SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
    pandora  64.5G  56.9G  7.61G    88%  ONLINE  -
    panroot    40G  26.7G  13.3G    66%  ONLINE  -
    
    One challenge that came up was the less than stellar performance of ssh over the VirtualBox NAT interface. So rather than fight this I set up a shared NFS file system in the root pool to stage the ZFS backup file. This made the process go much faster.

    In the host Solaris system
    # zfs create -o sharenfs=rw,anon=0 -o mountpoint=/share panroot/share
    

    Prepare the OpenSolaris virtual machine

    If you have not already done so, get a copy of VirtualBox, install it and set up a virtual machine for OpenSolaris.

    Important note: Do not install the VirtualBox guest additions. This will install some SMF services that will fail when booted on bare metal.

    Send a ZFS snapshot to the host OS root zpool

    Let's take a look around the freshly installed OpenSolaris system to see what we want to send.

    Inside the OpenSolaris virtual machine
    bash-3.2$ zfs list
    NAME                     USED  AVAIL  REFER  MOUNTPOINT
    rpool                   6.13G  9.50G    46K  /rpool
    rpool/ROOT              2.56G  9.50G    18K  legacy
    rpool/ROOT/opensolaris  2.56G  9.50G  2.49G  /
    rpool/dump               511M  9.50G   511M  -
    rpool/export            2.57G  9.50G  2.57G  /export
    rpool/export/home        604K  9.50G    19K  /export/home
    rpool/export/home/bob    585K  9.50G   585K  /export/home/bob
    rpool/swap               512M  9.82G   176M  -
    
    My host system root zpool (panroot) already has swap and dump, so these won't be needed. And it also has an /export hierarchy for home directories. I will recreate my OpenSolaris Primary System Administrator user once on bare metal, so it appears the only thing I need to bring over is the root dataset itself.

    Inside the OpenSolaris virtual machine
    bash-3.2$ pfexec zfs snapshot rpool/ROOT/opensolaris@scooby
    bash-3.2$ pfexec zfs send rpool/ROOT/opensolaris@scooby > /net/10.0.2.2/share/scooby.zfs
    
    We are now done with the virtual machine. It can be shut down and the storage reclaimed for other purposes.

    Restore the ZFS dataset in the host system root pool

    In addition to restoring the OpenSolaris root dataset, the canmount property should be set to noauto. I also destroy the NFS shared dataset since it will no longer be needed.
    # zfs receive panroot/ROOT/scooby < /share/scooby.zfs
    # zfs set canmount=noauto panroot/ROOT/scooby
    # zfs destroy panroot/share
    
    Now mount the new OpenSolaris root filesystem and fix up a few configuration files. Specifically
    • /etc/zfs/zpool.cache so that all boot environments have the same view of available ZFS pools
    • /etc/hostid to keep all of the boot environments using the same hostid. This is extremely important and failure to do this will leave some of your boot environments unbootable - which isn't very useful. /etc/hostid is new to build 100 and later.
    Rebuild the OpenSolaris boot archive and we will be done with that filesystem.
    # zfs set canmount=noauto panroot/ROOT/scooby
    # zfs set mountpoint=/mnt panroot/ROOT/scooby
    # zfs mount panroot/ROOT/scooby
    
    # cp /etc/zfs/zpool.cache /mnt/etc/zfs
    # cp /etc/hostid /mnt/etc/hostid
    
    # bootadm update-archive -f -R /mnt
    Creating boot_archive for /mnt
    updating /mnt/platform/i86pc/amd64/boot_archive
    updating /mnt/platform/i86pc/boot_archive
    
    # umount /mnt
    
    Make a home directory for your OpenSolaris administrator user (in this example the user is named admin). Also add a GRUB stanza so that OpenSolaris can be booted.
    # mkdir -p /export/home/admin
    # chown admin:admin /export/home/admin
    # cat >> /panroot/boot/grub/menu.lst <<"DOO"
    title Scooby
    root (hd0,3,a)
    bootfs panroot/ROOT/scooby
    kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
    module$ /platform/i86pc/$ISADIR/boot_archive
    DOO
    
    At this point we are done. Reboot the system and you should see a new GRUB stanza for our new OpenSolaris installation (scooby). Cue large audience applause track.
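    If you would rather verify the menu before rebooting, bootadm can list the GRUB entries it knows about (output will vary with your menu.lst):

    # bootadm list-menu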

    Live Upgrade and OpenSolaris Boot Environment Administration

    One interesting side effect, on the positive side, is the healthy interaction of Live Upgrade and beadm(1M). For your Solaris and nevada based installations you can continue to use lucreate(1M), luupgrade(1M), and luactivate(1M). On the OpenSolaris side you can see all of your Live Upgrade boot environments as well as your OpenSolaris boot environments. Note that we can create and activate new boot environments as needed.

    When in OpenSolaris
    # beadm list
    BE                           Active Mountpoint Space   Policy Created          
    --                           ------ ---------- -----   ------ -------          
    nv101a                       -      -          18.17G  static 2008-11-04 00:03 
    nv95                         -      -          122.07M static 2008-11-03 12:47 
    opensolaris                  -      -          2.83G   static 2008-11-03 16:23 
    opensolaris-2008.11-baseline R      -          2.49G   static 2008-11-04 11:16 
    s10u6                        -      -          97.22M  static 2008-11-03 12:03 
    s10x_u6wos_07b               -      -          205.48M static 2008-11-01 20:51 
    scooby                       N      /          2.61G   static 2008-11-04 10:29 
    
    # beadm create doo
    # beadm activate doo
    # beadm list
    BE                           Active Mountpoint Space   Policy Created          
    --                           ------ ---------- -----   ------ -------          
    doo                          R      -          5.37G   static 2008-11-04 16:23 
    nv101a                       -      -          18.17G  static 2008-11-04 00:03 
    nv95                         -      -          122.07M static 2008-11-03 12:47 
    opensolaris                  -      -          25.5K   static 2008-11-03 16:23 
    opensolaris-2008.11-baseline -      -          105.0K  static 2008-11-04 11:16 
    s10u6                        -      -          97.22M  static 2008-11-03 12:03 
    s10x_u6wos_07b               -      -          205.48M static 2008-11-01 20:51 
    scooby                       N      /          2.61G   static 2008-11-04 10:29 
    
    
    For the first time I have a single Solaris disk environment that can boot Solaris 10, Solaris Express Community Edition (nevada) or OpenSolaris and have access to all of my data. I did have to add a mount for my shared FAT32 file system (I have an iPhone and several iPods - so Windows do occasionally get opened), but that is about it. Now off to the repository to start playing with all of the new OpenSolaris goodies like Songbird, Brasero, Bluefish and the Xen bits.
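    For reference, that FAT32 mount is a one-line addition to /etc/vfstab. The device name below is hypothetical - substitute your own fdisk partition (the :c suffix selects the first DOS partition on the disk):

    /dev/dsk/c0d0p0:c  -  /shared  pcfs  -  yes  -

    # mount /shared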


    Tuesday Sep 23, 2008

    LDOMs or Containers, that is the question....

    An often asked question, do I put my application in a container (zone) or an LDOM ? My question in reply is why the or ? The two technologies are not mutually exclusive, and in practice their combination can yield some very interesting results. So if it is not an or, under what circumstances would I apply each of the technologies ? And does it matter if I substitute LDOMs with VMware, Xen, VirtualBox or Dynamic System Domains ? In this context all virtual machine technologies are similar enough to treat them as a class, so we will generalize to zones vs virtual machines for the rest of this discussion.

    First to the question of zones. All applications in Solaris 10 and later should be deployed in zones with the following exceptions
    • The restricted set of privileges in a zone will not allow the application to operate correctly
    • The application interacts with the kernel in an intimate fashion (reads or writes kernel data)
    • The application loads or unloads kernel modules
    • There is a higher level virtualization or abstraction technology in use that would obviate any benefits from deploying the application in a zone
    Presented a different way, if the security model allows the application to run and you aren't diminishing the benefits of a zone, deploy in a zone.

    Some examples of applications that have difficulty with the restrictive privileges would be security monitoring and auditing, hardware monitoring, storage (volume) management software, specialized file systems, some forms of application monitoring, intrusive debugging and inspection tools that use the kernel facilities such as the DTrace FBT provider. With the introduction of configurable zone privileges in Solaris 10 11/06, the number of applications that fit into this category should be few in number, highly specialized and not the type of application that you would want to deploy in a zone.
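    As a concrete illustration, granting a zone the privileges that user-level DTrace needs is a one-line change with configurable zone privileges. The zone name is hypothetical, and the zone must be rebooted to pick up the new privilege set:

    # zonecfg -z appzone 'set limitpriv=default,dtrace_proc,dtrace_user'
    # zoneadm -z appzone reboot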

    For the higher level abstraction exclusion, think of something at the application layer that tries to hide the underlying platform. The best example would be Oracle RAC. RAC abstracts the details of the platform so that it can provide continuously operating database services. It also has the characteristic that it is itself a consolidation platform with some notion of resource controls. Given the complexity associated with RAC, it would not be a good idea to consolidate non-RAC workloads on a RAC cluster. And since zones are all about consolidation, RAC would trump zones in this case.

    There are other examples such as load balancers and transaction monitors. These are typically deployed on smaller horizontally scalable servers to provide greater bandwidth or increased service availability. Although they do not provide consolidation services, their sophisticated availability features might not interact well with the restrictive nonglobal zone security model. High availability frameworks such as SunCluster do work well with zones. Zones abstract applications in such a way that service failover configurations can be significantly simplified.



    Unless your application falls under one of these exemptions, the application should be deployed in a zone.

    What about virtual machines ? This type of abstraction happens at a much lower level, in this case hardware resources (processors, memory, I/O). In contrast, zones abstract user space objects (processes, network stacks, resource controls). Virtual machines allow greater flexibility in running many types and versions of operating systems and applications, but they also eliminate many opportunities to share resources efficiently.

    Where would I use virtual machines ? Where you need the diversity of multiple operating systems. This can be different types of operating system (Windows, Linux, Solaris) or different versions or patch levels of the same operating system. The challenge here is that large sites can have servers at many different patch and update versions, not by design but as a result of inadequate patching and maintenance tools. Enterprise patch management tools (xVM OpsCenter), patch managers (PCA), or automated provisioning tools (OpsWare) can help reduce the number of software combinations and online maintenance using Live Upgrade can reduce the time and effort required to maintain systems.

    It is important to understand that zones are not virtual machines. Their differences and the implications of this are
    • Zones provide application isolation on a shared kernel
    • Zones share resources very efficiently (shared libraries, system caches, storage)
    • Zones have a configurable and restricted set of privileges
    • Zones allow for easy application of resource controls even in a complex dynamic application environment
    • Virtual machines provide relatively complete isolation between operating systems
    • Virtual machines allow consolidation of many types and versions of operating systems
    • Although virtual machines may allow oversubscription of resources, they provide very few opportunities to share critical resources
    • An operating system running in a virtual machine can still isolate applications using zones.
    And it is that last point that carries this conversation a bit farther. If the decision between zones and virtual machines isn't an or, under what conditions would it be an and, and what sort of benefit can be expected ?

    Consider the case of application consolidation. Suppose you have three applications: A, B and C. If they are consolidated without isolation then system maintenance becomes cumbersome as you can only patch or upgrade when all three application owners agree. Even more challenging is the time pressure to certify the newly patched or upgraded environment due to the fact that you have to test three things instead of one. Clearly isolation is a benefit in this case, and it is a persistent property (once isolated, forever isolated).

    Isolation using zones alone will be very efficient but there will be times when the common shared kernel will be inconvenient - approaching the problems of the non-isolated case. Isolation using virtual machines is simple and very flexible but comes with a cost that might be unnecessary.

    So why not do both ? Use zones to isolate the applications and use virtual machines for those times when you cannot support all of the applications with a common version of the operating system. In other words the isolation is a persistent property and the need for heterogeneous operating systems is temporary and specific. With some small improvements in the patching and upgrade tools, the time frame when you need heterogeneous operating systems can be reduced.

    Using our three applications as an example, A, B and C are deployed in separate zones on a single system image, bare metal or in a virtual machine. Everything is operating spectacularly until a new OS upgrade is available which provides some important new functionality for application A. So application owner A wants to upgrade immediately, application B doesn't care one way or the other, and (naturally) application C has just gone into seasonal lock-down and cannot be altered for the rest of the year.

    Using zones and virtual machines provides a unique solution. Provision a new virtual machine with the new operating system software, either on the same platform by reassigning resources (CPU, memory) or on a separate platform. Next clone the zone running application A. Detach the newly cloned zone and migrate it to the new virtual machine. A new feature in Solaris 10 10/08 will automatically upgrade the new zone upon attachment to a server running newer software. Leave the original zone alone for some period of time in the event that an adverse regression appears that would force you to revert to the original version. Eventually the original zone can be reclaimed, but at a time when convenient.
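    The zone migration itself is only a handful of commands. Here is a sketch of the idea with hypothetical zone names and paths - the -u flag on attach is the Solaris 10 10/08 update-on-attach feature mentioned above. On the source system (the source zone must be halted while it is cloned):

    # zonecfg -z appA-new 'create -t appA'
    # zonecfg -z appA-new 'set zonepath=/zones/appA-new'
    # zoneadm -z appA-new clone appA
    # zoneadm -z appA-new detach

    Move the zonepath over to the new system (zfs send or a tar archive both work), then on the target:

    # zonecfg -z appA-new 'create -a /zones/appA-new'
    # zoneadm -z appA-new attach -u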

    Migrate the other two applications at a convenient time using the same procedure. When all of the applications have been migrated and you are comfortable that they have been adequately tested, the old system image can be shut down and any remaining resources can be reclaimed for other purposes. Zones as the sole isolation agent cannot do this and virtual machines by themselves will require more administrative effort and higher resource consumption during the long periods when you don't need different versions of the operating system. Combined you get the best of both features.

    A less obvious example is ISV licensing. Consider the case of Oracle. Our friends at Oracle consider the combination of zones and capped resource controls as a hard partition method which allows you to license their software to the size of the resource cap, not the server. If you put Oracle in a zone on a 16 core system with a resource cap of 2 cores, you only pay for 2 cores. They have also made similar considerations for their Xen based Oracle VM product yet have been slow to respond to other virtual machine technologies. Zones to the rescue. If you deploy Oracle in a VM on a 16 core server you pay for all 16 cores. If you put that same application in a zone, in the same VM but cap the zone at 4 cores then you only pay for 4 cores.
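    For the capping itself, here is a sketch using the capped-cpu resource control (zone name hypothetical; on older Solaris 10 updates, a dedicated processor set bound to a resource pool achieves the same hard partition):

    # zonecfg -z oradb
    zonecfg:oradb> add capped-cpu
    zonecfg:oradb:capped-cpu> set ncpus=4
    zonecfg:oradb:capped-cpu> end
    zonecfg:oradb> exit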

    Zones are all about isolation and the application of resource controls. Virtual machines are all about heterogeneous operating systems. Use zones to persistently isolate applications. Use virtual machines during the times when a single operating system version is not feasible.

    This is only the beginning of the conversation. A new Blueprint based on measured results from some more interesting use cases is clearly needed. Jeff Savit, Jeff Victor and I will be working on this over the next few weeks and I'm sure that we will be blogging with partial results as they become available. As always, questions and suggestions are welcome.

    Thursday Jun 21, 2007

    Updated Solaris Bootcamp Presentations

    I've had a great time traveling around the country talking about Solaris. It's not exactly a difficult thing - there's plenty to talk about. Many of you have asked for copies of the latest Solaris update, virtualization overview and ZFS deep dive. Rather than have you dig through a bunch of old blog entries about bootcamps from 2005, here they are for your convenience.



    I hope this will save you some digging through http://mediacast.sun.com and tons of old blogs.

    In a few weeks I'll post a new "What's New in Solaris" which will have some really cool things. But we'll save that for later.


    Monday Jun 11, 2007

    True Virtualization ?

    While this is inspired by a recent conversation with a customer, I have seen the term "true virtualization" used quite a bit lately - mostly by people who have just attended a VMware seminar, and to a lesser extent folks from IBM trying to compare LPARs with Solaris zones. While one must give due credit to the fine folks at VMware for raising Information Technology (IT) awareness and putting virtualization in the common vocabulary, they have hardly cornered the market on virtualization, and using the term "true virtualization" may reveal how narrow an understanding they have of the concept, or an unfortunate arrogance that their approach is the only one that matters.

    Wikipedia defines virtualization as a technique for hiding the physical characteristics of computing resources from the way in which other systems, applications, or end users interact with those resources. While Wikipedia isn't the final authority, this definition is quite good and we will use it to start our exploration.

    So what is true virtualization ? Anything that (potentially) hides architectural details from running objects (programs, services, operating systems, data). No more, no less - end of discussion.

    Clearly VMware's virtualization products (ESX, Workstation) do that. They provide virtual machines that emulate the Intel x86 Instruction Set Architecture (ISA) so that operating systems think they are running on real hardware when in fact they are not. This type of virtualization would be classified as an abstraction type of virtual machines. But so is Xen, albeit with an interesting twist. In the case of Xen, a synthetic ISA based on the x86 is emulated removing some of the instructions that are difficult to virtualize. This makes porting a rather simple task - none of the user space code needs to be modified and the privileged code is generally limited to parts of the kernel that actually touch the hardware (virtual memory management, device drivers). In some respects, Xen is less of an abstraction as it does allow the virtual machines to see the architectural details thus permitting specific optimizations to occur that would be prohibited in the VMware case. And our good friends at Intel and AMD are adding new features to their processors to make virtualization less complicated and higher performance so the differences in approach between the VMware and Xen hypervisors may well blur over time.

    But is this true virtualization ? No, it is just one of many types of virtualization.

    How about the Java Virtual Machine (JVM) ? It is a run time executive that provides a virtualized environment for a completely synthetic ISA (although real pcode implementations have been done, they are largely for embedded systems). This is the magic behind write once and run anywhere and in general the approach works very well. So this is another example of virtualization - and also an abstraction type. And given the number of JVMs running around out there - if anyone is going to claim true virtualization, it would be the Java folks. Fortunately their understanding of the computer industry is broad and they are not arrogant - thus they would never suggest such folly.

    Sun4v Logical Domains (LDOMs) are a thin hypervisor based partitioning of a radically multithreaded SPARC processor. The guest domains (virtual machines) run on real hardware but generally have no I/O devices. These guest domains get their I/O over a private channel from a service domain (a special type of domain that owns devices and contains the real device drivers). So I/O is virtualized but all other operations are executed on real hardware. The hypervisor provides resource (CPU and memory) allocation and management and the private channels for I/O (including networking). This too is virtualization, but not like Xen or VMware. This is an example of partitioning. Another example is IBM (Power) LPARs, albeit with a slightly different approach.

    Are there other types of virtualization ? Of course there are.

    Solaris zones are an interesting type of virtualization called OS Virtualization. In this case we interpose the virtualization layer between the privileged kernel layer and the non-privileged user space. The benefit here is that all user space objects (name space, processes, address spaces) are completely abstracted and isolated. Unlike the methods previously discussed, the kernel and underlying hardware resources are not artificially limited, so the full heavy lifting capability of the kernel is available to all zones (subject to other resource management policies). The trade-off for this capability is that all zones share a common kernel. This has some availability and flexibility limitations that should be considered in a system design using zones. Non-native (Branded) zones offer some interesting flexibilities that we are just now beginning to exploit, so the future of this approach is very bright indeed. And if I read my competitors' announcements correctly, even our good friends at IBM are embracing this approach with future releases of AIX. So clearly there is something to this thing called OS Virtualization.

    And there are other approaches as well - hybrids of the types we have been discussing. Special purpose libraries that either replace or interpose between common system libraries can provide some very nice virtualization capabilities - some of these transparent to applications, some not. The open source project Wine is a good example of this. User mode Linux and its descendants offer the ability to run an operating system as a user mode program, albeit not particularly efficiently.

    QEMU is an interesting general purpose ISA simulator/translator that can be used to host non-native operating systems (such as Windows while running Solaris or Linux). The interesting thing about QEMU is that you can strip out the translation features with a special kernel module (kqemu) and the result is very efficient and nicely performing OS hosting (essentially simulating x86 running on x86). Kernel-based Virtual Machines (KVM) extends the QEMU capability to add yet another style of virtualization to Linux. It is not entirely clear at present whether KVM is really a better idea or just another not invented here (NIH) Linux project. Time will tell, but it would have been nice for the Linux kernel maintainers to take a page from OpenSolaris and embrace an already existing project that had some non-Linux vendor participation (*BSD, Solaris, Plan 9, plus some mainstream Linux distributions). At the very least it is confusing as most experienced IT professionals will associate KVM with Keyboard Video and Mouse switching products. There are other commercial products such as QuickTransit that use a similar approach (ISA translation).

    And there are many many more.

    So clearly the phrase "true virtualization" has no common or useful meaning. Questioning the application or definition of the phrase will likely uncover a predisposition or bias that might be a good starting point to carry on an interesting dialog. And that's always a good idea.

    I leave you with one last thought. It is probably human nature to seek out the one uniform solution to all of our problems, the Grand Unification Theory being a great example. But in general, be skeptical of one size fits all approaches - while they may in fact fit all situations, they are generally neither efficient nor flattering. What does this have to do with virtualization ? Combining various techniques quite often will yield spectacular results. In other words, don't think VMware vs Zones - think VMware and Zones. In fact if you think Solaris, don't even think about zones, just do zones. If you need the additional abstraction to provide flexibility (heterogeneous or multiple version OS support) then use VMware or LDOMs. And zones.

    Next time we'll take a look at abstraction style virtualization techniques and see if we can develop a method of predicting the overhead that each technique might impose on a system. Since a good apples to apples benchmark is not likely to ever see the light of day, perhaps some good old fashioned reasoning can help us make sense of what information we can find.

    About

    Bob Netherton is a Principal Sales Consultant for the North American Commercial Hardware group, specializing in Solaris, Virtualization and Engineered Systems. Bob is also a contributing author of Solaris 10 Virtualization Essentials.

    This blog will contain information about all three, but primarily focused on topics for Solaris system administrators.

    Please follow me on Twitter or Facebook, or send me email.
