Tuesday Oct 06, 2009

VirtualBox 3.0.8 is now available

VirtualBox 3.0.8 has been released for all platforms. 3.0.8 is a maintenance release and contains quite a few performance and stability improvements. For a list of the changes in this release, please take a look at the VirtualBox Changelog.

It's early yet, but my testing of multi-CPU Solaris guests is quite encouraging. I've been stressing several of them quite hard and they are working as expected. I am still using IDE for my virtual platform disk controllers; I'll fire up a few guests tomorrow and see how SATA does. My Solaris desktop is now smokin' fast.
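
For anyone who wants to repeat the experiment, giving a guest multiple CPUs is a single VBoxManage call (a sketch; the VM name is illustrative, and guest SMP requires VT-x or AMD-V to be enabled):

$ VBoxManage modifyvm "sol10-guest" --cpus 4 --hwvirtex on
$ VBoxManage startvm "sol10-guest"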


Wednesday Sep 30, 2009

Oracle reduces licensing multiplier for UltraSPARC T2+ systems

Taking a look at the Oracle Processor Core Factor Table, I noticed that on September 24 our friends at Oracle reduced the per-core licensing factor on UltraSPARC T2+ systems. This includes the Sun SPARC Enterprise T5140, T5240, T5440 and T6340 (blade). If you will permit the pun, this is very cool news indeed.
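
To put some numbers behind the pun (and do check the current table before quoting anyone a price - I am assuming the factor moved from 0.75 to 0.5): a T5240 with two 8-core T2+ processors is 16 cores, so the license count drops from 16 x 0.75 = 12 to 16 x 0.5 = 8 processor licenses. That is a third off.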


Monday Sep 28, 2009

VirtualBox 3.0.6 has been released

I'm a bit late getting this one out, but VirtualBox 3.0.6 has been released for all platforms. 3.0.6 is a maintenance release and contains quite a few performance and stability improvements. For a list of the changes in this release, please take a look at the VirtualBox Changelog.

The biggest improvement I've noticed is in the shared folders performance for Solaris guests. This is now quite usable and greatly simplifies my internal VBox configuration. I no longer need to maintain an internal NFS server VM for sharing my data across Solaris guests. There are also a number of fixes for networking, especially host only networking.
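
Mounting a shared folder from inside a Solaris guest is a one-liner once the Guest Additions are installed (a sketch; "mydata" is a share name I made up for illustration):

# mkdir -p /mnt/mydata
# mount -F vboxfs mydata /mnt/mydata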

Unfortunately Solaris SMP guests are still a bit troublesome. The best combination I have found is OpenSolaris 2009.06 with IDE disk controllers in the virtual platform; it has proved stable under quite a few intense tests, up through 4 virtual CPUs.
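
If you want to verify which controller a guest is actually using without opening the GUI, showvminfo will tell you (the VM name is illustrative):

$ VBoxManage showvminfo "osol-0906" | grep -i -e ide -e sata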


Friday Sep 11, 2009

Facts about Albert Pujols



As the 2009 Major League Baseball season enters its final month, St. Louis Cardinals first baseman Albert Pujols looks to win his second National League Most Valuable Player (MVP) award. St. Louis Cardinals Twitter Nation (#stlcards) member Ray Kern (@Southside_Ray) thought it would be a good idea to share some of what we have learned about the Cardinal slugger. So in the spirit of Facts about Chuck Norris, here are some .....


    Facts about Albert Pujols

  1. Albert Pujols doesn't hit baseballs. Baseballs see Albert and fly out of the park on their own.
  2. An Albert Pujols home run can cause the tides to change.
  3. Albert Pujols bats third because if he hit in any other spot it would cause the world to collapse.
  4. If Albert Pujols were to play for any other team, that team would instantaneously transform into the St. Louis Cardinals.
  5. Albert Pujols could play every position. He just thinks other players deserve a chance too.
  6. While playing a home game for the Springfield Cardinals (AA farm team of the St. Louis Cardinals), Albert Pujols once hit a ball so far it was credited as a home run in both Springfield and St. Louis.
  7. An Albert Pujols home run equals two home runs by any other player.
  8. Anybody who sees Albert Pujols hit a home run suddenly becomes a St. Louis Cardinals fan (Cubs fans excluded, naturally).
  9. The Food and Drug Administration (FDA) has approved pitching to Albert Pujols as a homeopathic cure for constipation.
  10. When the player in front of Albert Pujols hits a home run it is because the pitcher is watching Albert taking his practice swings.
  11. When Albert Pujols is in your town the local air traffic control won't allow airplanes within 25 miles of the stadium.
  12. The reason former Cardinal pitchers give up home runs to Albert Pujols is because they're too distracted by his glowing red eyes.
  13. Tony LaRussa refuses to let Albert Pujols pitch because it would make teammate Chris Carpenter look bad.
  14. When Albert Pujols hits a home run the person catching the ball has great luck for a year. And a broken hand for the next six weeks ... So beware!
  15. When other Cardinal players try to touch Albert Pujols' bat after he hits a home run, it burns their hands.
  16. Albert Pujols can only hit 2 home runs in a game. Unless, of course, he wants to hit more.
  17. When dreaming, Albert Pujols hits all the time... Oh wait, that's not a dream!
  18. Tony LaRussa asks Albert Pujols for managing advice.
  19. Albert Pujols once traveled back in time. You may know him as Babe Ruth.
  20. Albert Pujols went down to the crossroads to make a deal with the Devil. Once the Devil saw Albert hit, he just shook his head and gave Albert his soul instead.
  21. Albert Pujols was the first right handed batter to hit a home run into McCovey Cove (San Francisco). From San Diego.
  22. If Ted Williams hit in front of Albert Pujols in 1947, Williams would have lost the triple crown. But he would have hit .531.
  23. Tweets about Albert Pujols are the number one reason that Twitter goes over capacity.
  24. They will eventually name the Most Valuable Player (MVP) award the Albert Pujols Award.
  25. The reason that Adam Wainwright and Chris Carpenter are Cy Young candidates in 2009 is that neither of them have to pitch to Albert Pujols.
  26. Interstate 70 will be renamed The Albert Pujols Highway. Interstate 70 was also given to Albert as part of his most recent contract negotiation.
  27. Swine flu once got a case of Albert Pujols.
  28. The games the St. Louis Cardinals lose are the ones where Albert Pujols wants the season to be more interesting.
  29. Albert Pujols could hit a home run in every game. But he doesn't want to appear too good.
  30. Albert Pujols doesn't inject himself with human growth hormones (HGH). He injects himself with hot lava.
  31. Albert Pujols was the only baseball player ever drafted at birth.
I'm sure this list will grow, as will the legend of Albert Pujols. If you have any facts about Albert, please send them to me and I will update this list.

A very special thanks to Ray Kern for getting this list started and contributing most of the content. And thanks to Cardinal Nation on Twitter (#stlcards). If you want to follow some of the best fans in sports, check out #stlcards on Twitter.

Wednesday Sep 09, 2009

VirtualBox 3.0.6 Beta Release 1 is now available for testing

The first beta release of VirtualBox 3.0.6 is now available. You can download the binaries at http://download.virtualbox.org/virtualbox/3.0.6_BETA1/

Version 3.0.6 is a maintenance update and contains the following fixes:
  • VMM: fixed IO-APIC overhead for 32 bits Windows NT, 2000, XP and 2003 guests (AMD-V only; bug #4392)
  • VMM: fixed a Guru meditation under certain circumstances when enabling a disabled device (bug #4510)
  • VMM: fixed a Guru meditation when booting certain Arch Linux guests (software virtualization only; bug #2149)
  • VMM: fixed hangs with 64 bits Solaris & OpenSolaris guests (bug #2258)
  • VMM: fixed decreasing rdtsc values (AMD-V & VT-x only; bug #2869)
  • VMM: small Solaris/OpenSolaris performance improvements (VT-x only)
  • VMM: cpuid change to correct reported virtual CPU id in Linux
  • VMM: NetBSD 5.0.1 CD hangs during boot (VT-x only; bug #3947)
  • Solaris hosts: fixed a potential host system deadlock when CPUs were onlined or offlined
  • Python WS: fixed issue with certain enumeration constants having wrong values in Python webservices bindings
  • Python API: several threading and platform issues fixed
  • Python shell: added exportVM command
  • Python shell: improvements and bugfixes
  • Python shell: corrected detection of home directory in remote case
  • OVF: fixed XML comment handling that could lead to parser errors
  • Main: fixed a rare parsing problem with port numbers of USB device filters in machine settings XML
  • Main: restrict guest RAM size to 1.5 GB (32 bits Windows hosts only)
  • GUI: fixed rare crash when removing the last disk from the media manager (bug #4795)
  • Linux hosts: don't crash on Linux PAE kernel < 2.6.11 (in particular RHEL/CentOS 4); disable VT-x on Linux kernels < 2.6.13 (bug #1842)
  • Linux/Solaris hosts: correctly detect keyboards with less keys than usual (bug #4799)
  • Serial: fixed host mode (Solaris, Linux and Mac OS X hosts; bug #4672)
  • VRDP: Remote USB Protocol version 3
  • SATA: fixed hangs and BSODs introduced with 3.0.4 (#4695, #4739, #4710)
  • SATA: fixed a bug which prevented Windows 7 from detecting more than one hard disk
  • iSCSI: fix logging out when the target has dropped the connection, fix negotiation of simparameters, fix command resend when the connection was dropped, fix processing SCSI status for targets which do not use phase collapse
  • BIOS: fixed a bug preventing to start the OS/2 boot manager (2.1.0 regression, bug #3911)
  • PulseAudio: don't hang during VM termination if the connection to the server was unexpectedly terminated (bug #3100)
  • Mouse: fixed weird mouse behaviour with SMP (Solaris) guests
  • HostOnly Network: fixed failure in CreateHostOnlyNetworkInterface() on Linux (no GUID)
  • HostOnly Network: fixed wrong DHCP server startup while hostonly interface bringup on Linux
  • HostOnly Network: fixed incorrect factory and default MAC address on Solaris
  • DHCP: fixed a bug in the DHCP server where it allocated one IP address less than the configured range
  • E1000: fixed receiving of multicast packets
  • E1000: fixed up/down link notification after resuming a VM
  • NAT: fixed ethernet address corruptions (bug #4839)
  • NAT: fixed hangs, dropped packets and retransmission problems (bug #4343)
  • Bridged Network: fixed packet queue issue which might cause DRIVER_POWER_STATE_FAILURE BSOD for windows hosts (bug #4821)
  • Windows Additions: fixed a bug in VBoxGINA which prevented selecting the right domain when logging in the first time
  • Windows host installer: should now also work on Unicode systems (like Korean, bug #3707)
  • Shared clipboard: do not send zero-terminated text to X11 guests and hosts (bug #4712)
  • Shared clipboard: use a less CPU intensive way of checking for new data on X11 guests and hosts (bug #4092)
  • Mac OS X hosts: prevent password dialogs in 32-bit Snow Leopard
  • Solaris hosts: worked around an issue that caused the host to hang (bug #4486)
  • Guest Additions: do not hide the host mouse cursor when restoring a saved state (bug #4700)
  • Windows guests: fixed issues with the display of the mouse cursor image (bugs #2603, #2660 and #4817)
  • SUSE 11 guests: fixed Guest Additions installation (bug #4506)
  • Guest Additions: support Fedora 12 Alpha guests (bugs #4731, #4733 and #4734)

Please do not use this VirtualBox Beta release on production machines. A VirtualBox Beta release should be considered a bleeding-edge release meant for early evaluation and testing purposes.

Please use our 'VirtualBox Beta Feedback' forum at http://forums.virtualbox.org/viewforum.php?f=15 to report any problems with the Beta.


Monday Sep 07, 2009

Creating an iPhone Custom Ringtone



As a long-time Palm user, the transition to the iPhone had more than its share of difficulties. Thanks to some software updates (cut and paste) and VirtualBox, assimilation is complete and I have become a big fan of the little device. With that as a background, let's explore creating custom ringtones from items in your music library. And contrary to what iTunes tells you, it is possible to create custom ringtones from any DRM-free content in your music library - not just the items you purchased from the iTunes store.

1. Choosing the proper ringtone

Everybody in your contact list deserves their own distinctive ringtone - or at least the ones that actually call you. One word of warning though: make sure you choose an appropriate audio source and consider what happens if they accidentally dial your phone while standing next to you.

True story: My first custom ringtone was for my spouse of 26 (and counting) years. We don't really have an "our song" so I opted for something instantly recognizable. In 5 notes or less. Deep Purple's Smoke on the Water is too obvious and pedestrian. What's the next best riff ? Think Ronnie Montrose.

As I am creating the ringtone, a buddy at work notices what I am doing and taps me on the shoulder. "Dude, are you like wanting to make her mad ? Are you nuts - do you only want to live with half of your stuff ?"

I guess my quizzical expression indicated that I needed further explanation.

"Frankenstein. Dude, you chose Frankenstein. ..... Frankenstein ???? Get it ?"

"Oh. OH!!!! I get it"

While Ronnie Montrose's opening riff from Edgar Winter's Frankenstein is one of the most recognizable in the rock catalog, it is clearly an inappropriate ringtone for a loved one. My bride has a great sense of humor, but I doubt that it would extend to this.

Please learn from my near-fatal mistake - choose a good ringtone. In my example I will use Brainstorm from Hawkwind's third album, Doremi Fasol Latido. Another great opening riff. And no, I will not use this for my spouse, daughter, boss or any other person real or fictional.

2. Edit the source for your ringtone

A ringtone needs to be a short AAC encoded audio file, at least 10 seconds in length and no longer than 30 seconds. If you are a computer wizard this is not a difficult task; then again, if you were, you probably wouldn't be reading this howto. Fortunately for the rest of us, iTunes can do this quite nicely.
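
If you are comfortable at a command line, a tool such as ffmpeg can do the trim and encode in one pass. Treat this as a hedged sketch: the AAC codec flag varies by ffmpeg build (libfaac here), and the filename is from my example.

ffmpeg -i "01 Brainstorm.mp3" -t 25 -acodec libfaac Brainstorm.m4a

The rest of this howto sticks with iTunes, which you already have.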

After listening to Brainstorm, I have decided to use the first 25 seconds for my ringtone.

Right click the song you want to use for your ringtone - Brainstorm in this example.

Select Get Info and then click the Options tab.

Fill in the start and stop times in the boxes as I have on the left.

Note: The start and stop times do not have to be in whole seconds. Play with these numbers a bit if you want to cut out something like a voice or drum beat. A few tenths of a second can make a big difference.

Click OK and you will have an edited sound source that we will use to make our new ringtone.

Create an AAC encoded audio file



Right click your edited song and look for "Create AAC Version". If it is there, skip over the next few steps.

If you see Create MP3 Version instead, don't panic. A lot of the other howtos that I found skipped this - and it's not exactly obvious what to do next.

The Create setting is based on your CD Import settings. The iTunes default is AAC which is correct for creating ringtones. It is also less than desirable if you want to get the most out of your iPod listening. Or (gasp) if you want to listen to your audio files with anything other than (another gasp) iTunes.

If you see Create MP3 Version as on the left, all you need to do is change your CD Import preferences. Click Edit -> Preferences and while on the General tab click Import Settings.


Take note of your current settings. You will want to change them back when you are done.

Select AAC Encoder in the Import Using: drop box.

Since the internal speaker in the iPhone isn't exactly high fidelity, the encoding rate isn't important. Click OK and we are ready to make the audio file that will eventually become our ringtone.

Create an AAC encoded audio file - this time we mean it





Now, right click your edited audio source and select Create AAC Version. You should see iTunes start the encoder and in a few seconds you will have another copy of your song. The length of the ringtone will be shown in the Time column.

Important: Before you forget, and you will - trust me, go back to the original song and clear out the start and stop times. Unless you really like listening to your ringtone in your iPod.

Select the original song, right click for the song menu, select Get Info, click the Options tab and uncheck Start Time and Stop Time.

What's in a name - quite a lot actually

The next step is to copy the new audio file to your Desktop where we will rename it. Click and drag the copy of the song you just created to your desktop. This should create a new icon, which is really a file in your Desktop directory. The filetype is .m4a which associates it with an iTunes song (audio file). What we need to do is rename it to something with a filetype of .m4r.

Before you forget, delete the audio file you created from your iTunes library. It is of no further use.

Now open up a command window. Click Start -> All Programs -> Accessories -> Command Prompt or Start -> Run and enter cmd. Either way works.

In the command shell, change your directory down one level to Desktop and rename the audio file from .m4a to .m4r. The icon on your desktop should immediately change from an audio icon to a ringtone icon.


If it is not clear from the above screenshot, the command shell dialog went something like this. My commands are the lines typed after the > prompts.
Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.

C:\Documents and Settings\bob>cd Desktop

C:\Documents and Settings\bob\Desktop>dir
 Volume in drive C has no label.
 Volume Serial Number is 2CBE-1F9C

 Directory of C:\Documents and Settings\bob\Desktop

09/06/2009  02:07 PM    <DIR>        .
09/06/2009  02:07 PM    <DIR>        ..
09/06/2009  02:02 PM              1,072,142  01 Brainstorm.m4a
               1 File(s)           1,072,142 bytes
               2 Dir(s)       10,997,751,040 free

C:\Documents and Settings\bob\Desktop>ren "01 Brainstorm.m4a" "Brainstorm.m4r"
C:\Documents and Settings\bob\Desktop>exit
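
Mac users can skip the DOS nostalgia - the same rename is a single Terminal command (the filename matches my example):

$ mv "$HOME/Desktop/01 Brainstorm.m4a" "$HOME/Desktop/Brainstorm.m4r"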

Copy your new ringtone into iTunes



Drag your newly renamed ringtone into iTunes. You should now see it in the Ringtones part of your library.

If you have set up your iTunes to copy all files into your library (which should be the default), you can delete the icon from your Desktop.

This procedure should work with any non-DRM audio file in your iTunes music library. Now everyone in your contact list can have their own custom ringtone.

Tuesday Jun 30, 2009

VirtualBox 3.0 is now available

VirtualBox 3.0 has been released and is now available for download. You can get binaries for Windows, OS X (Intel Mac), Linux and Solaris hosts at http://www.virtualbox.org/wiki/Downloads

Version 3.0 is a major update and contains the following new features:
  • SMP guest machines, up to 32 virtual CPUs: requires Intel VT-x or AMD-V
  • Experimental support for Direct3D 8/9 in Windows guests: great for games and multimedia applications
  • Support for OpenGL 2.0 for Windows, Linux and Solaris guests

There is also a long list of bugs fixed in the new release. Please see the Changelog for a complete list.


In my early testing of VirtualBox 3.0 I have been most impressed by the SMP performance as well as the Direct3D passthrough from the guest. There are now some games that I can play in a guest virtual machine that I could not previously. Thanks to the VirtualBox development team for another great release.
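
Enabling the Direct3D passthrough is one flag on the VM, plus installing the Guest Additions with the 3D option inside the guest (the VM name is illustrative):

$ VBoxManage modifyvm "xp-games" --accelerate3d on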


Wednesday Jun 17, 2009

VirtualBox 3.0 Beta Release 1 is now available for testing

The first beta release of VirtualBox 3.0 is now available. You can download the binaries at http://download.virtualbox.org/virtualbox/3.0.0_BETA1/

Version 3.0 will be a major update. The following major new features were added:
  • Guest SMP with up to 32 virtual CPUs (VT-x and AMD-V only)
  • Windows guests: ability to use Direct3D 8/9 applications / games (experimental)
  • Support for OpenGL 2.0 for Windows, Linux and Solaris guests

In addition, the following items were fixed and/or added:
  • Virtual mouse device: eliminated micro-movements of the virtual mouse which were confusing some applications (bug #3782)
  • Solaris hosts: allow suspend/resume on the host when a VM is running (bug #3826)
  • Solaris hosts: tighten the restriction for contiguous physical memory under certain conditions
  • VMM: fixed occasional guru meditation when loading a saved state (VT-x only)
  • VMM: eliminated IO-APIC overhead with 32 bits guests (VT-x only, some Intel CPUs don’t support this feature (most do); bug #638)
  • VMM: fixed 64 bits CentOS guest hangs during early boot (AMD-V only; bug #3927)
  • VMM: performance improvements for certain PAE guests (e.g. Linux 2.6.29+ kernels)
  • GUI: added mini toolbar for fullscreen and seamless mode (Thanks to Huihong Luo)
  • GUI: redesigned settings dialogs
  • GUI: allow to create/remove more than one host-only network adapter
  • GUI: display estimated time for long running operations (e.g. OVF import/export)
  • GUI: Fixed rare hangs when opening the OVF import/export wizards (bug #4157)
  • VRDP: support Windows 7 RDP client
  • Networking: fixed another problem with TX checksum offloading with Linux kernels up to version 2.6.18
  • VHD: properly write empty sectors when cloning VHD images (bug #4080)
  • VHD: fixed crash when discarding snapshots of a VHD image
  • VBoxManage: fixed incorrect partition table processing when creating VMDK files giving raw partition access (bug #3510)
  • OVF: several OVF 1.0 compatibility fixes
  • Shared Folders: sometimes a file was created using the wrong permissions (2.2.0 regression; bug #3785)
  • Shared Folders: allow to change file attributes from Linux guests and use the correct file mode when creating files
  • Shared Folders: fixed incorrect file timestamps, when using Windows guest on a Linux host (bug #3404)
  • Linux guests: new daemon vboxadd-service to handle time synchronization and guest property lookup
  • Linux guests: implemented guest properties (OS info, logged in users, basic network information)
  • Windows host installer: VirtualBox Python API can now be installed automatically (requires Python and Win32 Extensions installed)
  • USB: Support for high-speed isochronous endpoints has been added. In addition, read-ahead buffering is performed for input endpoints (currently Linux hosts only). This should allow additional devices to work, notably webcams
  • NAT: allow to configure socket and internal parameters
  • Registration dialog uses Sun Online accounts now

Please do not use this VirtualBox Beta release on production machines. A VirtualBox Beta release should be considered a bleeding-edge release meant for early evaluation and testing purposes.

Please use our 'VirtualBox Beta Feedback' forum at http://forums.virtualbox.org/viewforum.php?f=15 to report any problems with the Beta.


Thursday May 21, 2009

Getting Rid of Pesky Live Upgrade Boot Environments

As we discussed earlier, Live Upgrade can solve most of the problems associated with patching and upgrading your Solaris system. I'm not quite ready to post the next installment in the LU series, but from some of the comments and email I have received, there are two problems that I would like to help you work around.

Oh where oh where did that file system go ?

One thing you can do to stop Live Upgrade in its tracks is to remove a file system that it thinks another boot environment needs. This does fall into the category of user error, but you are more likely to run into this in a ZFS world where file systems can be created and destroyed with great ease. You will also run into a variant of this if you change your zone configurations without recreating your boot environment, but I'll save that for a later day.

Here is our simple test case:
  1. Create a ZFS file system.
  2. Create a new boot environment.
  3. Delete the ZFS file system.
  4. Watch Live Upgrade fail.

# zfs create arrakis/temp

# lucreate -n test
Checking GRUB menu...
System has findroot enabled GRUB
Analyzing system configuration.
Comparing source boot environment <s10u7-baseline> file systems with the
file system(s) you specified for the new boot environment. Determining
which file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <test>.
Source boot environment is <s10u7-baseline>.
Creating boot environment <test>.
Cloning file systems from boot environment <s10u7-baseline> to create boot environment <test>.
Creating snapshot for <rpool/ROOT/s10u7-baseline> on <rpool/ROOT/s10u7-baseline@test>.
Creating clone for <rpool/ROOT/s10u7-baseline@test> on <rpool/ROOT/test>.
Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/test>.
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <s10u6_baseline> as <mount-point>//boot/grub/menu.lst.prev.
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <test> as <mount-point>//boot/grub/menu.lst.prev.
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <nv114> as <mount-point>//boot/grub/menu.lst.prev.
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <route66> as <mount-point>//boot/grub/menu.lst.prev.
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <nv95> as <mount-point>//boot/grub/menu.lst.prev.
File </boot/grub/menu.lst> propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE <test> in GRUB menu
Population of boot environment <test> successful.
Creation of boot environment <test> successful.

# zfs destroy arrakis/temp

# luupgrade -t -s /export/patches/10_x86_Recommended-2009-05-14  -O "-d" -n test
System has findroot enabled GRUB
No entry for BE <test> in GRUB menu
Validating the contents of the media </export/patches/10_x86_Recommended-2009-05-14>.
The media contains 143 software patches that can be added.
All 143 patches will be added because you did not specify any specific patches to add.
Mounting the BE <test>.
ERROR: Read-only file system: cannot create mount point </.alt.tmp.b-59c.mnt/arrakis/temp>
ERROR: failed to create mount point </.alt.tmp.b-59c.mnt/arrakis/temp> for file system </arrakis/temp>
ERROR: unmounting partially mounted boot environment file systems
ERROR: cannot mount boot environment by icf file </etc/lu/ICF.5>
ERROR: Unable to mount ABE <test>: cannot complete lumk_iconf
Adding patches to the BE <test>.
Validating patches...

Loading patches installed on the system...

Cannot check name /a/var/sadm/pkg.
Unmounting the BE <test>.
The patch add to the BE <test> failed (with result code <1>).
The proper Live Upgrade solution to this problem would be to destroy and recreate the boot environment, or just recreate the missing file system (I'm sure that most of you have figured the latter part out on your own). The rationale is that the alternate boot environment no longer matches the storage configuration of its source. This was fine in a UFS world, but perhaps a bit constraining when ZFS rules the landscape. What if you really wanted the file system to be gone forever?

With a little more understanding of the internals of Live Upgrade, we can fix this rather easily.

Important note: We are about to modify undocumented Live Upgrade configuration files. The formats, names, and contents are subject to change without notice and any errors made while doing this can render your Live Upgrade configuration unusable.

The file system configurations for each boot environment are kept in a set of Internal Configuration Files (ICF) in /etc/lu named ICF.n, where n is the boot environment number. From the error message above we see that /etc/lu/ICF.5 is the one that is causing the problem. Let's take a look.
# cat /etc/lu/ICF.5
test:-:/dev/dsk/c5d0s1:swap:4225095
test:-:/dev/zvol/dsk/rpool/swap:swap:8435712
test:/:rpool/ROOT/test:zfs:0
test:/archives:/dev/dsk/c1t0d0s2:ufs:327645675
test:/arrakis:arrakis:zfs:0
test:/arrakis/misc:arrakis/misc:zfs:0
test:/arrakis/misc2:arrakis/misc2:zfs:0
test:/arrakis/stuff:arrakis/stuff:zfs:0

test:/arrakis/temp:arrakis/temp:zfs:0

test:/audio:arrakis/audio:zfs:0
test:/backups:arrakis/backups:zfs:0
test:/export:arrakis/export:zfs:0
test:/export/home:arrakis/home:zfs:0
test:/export/iso:arrakis/iso:zfs:0
test:/export/linux:arrakis/linux:zfs:0
test:/rpool:rpool:zfs:0
test:/rpool/ROOT:rpool/ROOT:zfs:0
test:/usr/local:arrakis/local:zfs:0
test:/vbox:arrakis/vbox:zfs:0
test:/vbox/fedora8:arrakis/vbox/fedora8:zfs:0
test:/video:arrakis/video:zfs:0
test:/workshop:arrakis/workshop:zfs:0
test:/xp:/dev/dsk/c2d0s7:ufs:70396830
test:/xvm:arrakis/xvm:zfs:0
test:/xvm/fedora8:arrakis/xvm/fedora8:zfs:0
test:/xvm/newfs:arrakis/xvm/newfs:zfs:0
test:/xvm/nv113:arrakis/xvm/nv113:zfs:0
test:/xvm/opensolaris:arrakis/xvm/opensolaris:zfs:0
test:/xvm/s10u5:arrakis/xvm/s10u5:zfs:0
test:/xvm/ub710:arrakis/xvm/ub710:zfs:0
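
As an aside, the BE-number-to-name mapping lives in /etc/lutab, which we will meet properly a bit later. If you ever need to confirm which ICF file belongs to which boot environment, a read-only sketch like this will print the mapping (the awk pattern assumes the lutab format shown later in this article):

# awk -F: '$3 == "C" { printf "BE %s is ICF.%s\n", $2, $1 }' /etc/lutab
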
The first step is to clean up the mess left by the failed luupgrade attempt. At the very least we will need to unmount the alternate boot environment root. It is also very likely that we will have to unmount a few temporary directories, such as /tmp and /var/run. Since this is ZFS we will also have to remove the directories created when these file systems were mounted.
# df -k | tail -3
rpool/ROOT/test      49545216 6879597 7546183    48%    /.alt.tmp.b-Fx.mnt
swap                 4695136       0 4695136     0%    /a/var/run
swap                 4695136       0 4695136     0%    /a/tmp

# luumount test
# umount /a/var/run
# umount /a/tmp
# rmdir /a/var/run /a/var /a/tmp

Next we need to remove the missing file system entry from the current copy of the ICF file. Use whatever method you prefer (vi, perl, grep). Once we have corrected our local copy of the ICF file we must propagate it to the alternate boot environment we are about to patch. You can skip the propagation if you are going to delete the boot environment without doing any other maintenance activities. The normal Live Upgrade operations will take care of propagating the ICF files to the other boot environments, so we should not have to worry about them at this time.
# mv /etc/lu/ICF.5 /tmp/ICF.5
# grep -v arrakis/temp /tmp/ICF.5 > /etc/lu/ICF.5 
# cp /etc/lu/ICF.5 `lumount test`/etc/lu/ICF.5
# luumount test
At this point we should be good to go. Let's try the luupgrade again.
# luupgrade -t -n test -O "-d" -s /export/patches/10_x86_Recommended-2009-05-14
System has findroot enabled GRUB
No entry for BE <test> in GRUB menu
Validating the contents of the media </export/patches/10_x86_Recommended-2009-05-14>.
The media contains 143 software patches that can be added.
All 143 patches will be added because you did not specify any specific patches to add.
Mounting the BE <test>.
Adding patches to the BE <test>.
Validating patches...

Loading patches installed on the system...

Done!

Loading patches requested to install.

Approved patches will be installed in this order:

118668-19 118669-19 119214-19 123591-10 123896-10 125556-03 139100-02


Checking installed patches...
Verifying sufficient filesystem capacity (dry run method)...
Installing patch packages...

Patch 118668-19 has been successfully installed.
Patch 118669-19 has been successfully installed.
Patch 119214-19 has been successfully installed.
Patch 123591-10 has been successfully installed.
Patch 123896-10 has been successfully installed.
Patch 125556-03 has been successfully installed.
Patch 139100-02 has been successfully installed.

Unmounting the BE <test>.
The patch add to the BE <test> completed.
Now that the alternate boot environment has been patched, we can activate it at our convenience.

I keep deleting and deleting and still can't get rid of those pesky boot environments

This is an interesting corner case where the Live Upgrade configuration files get so scrambled that even simple tasks like deleting a boot environment are not possible. Every time I have gotten myself into this situation I can trace it back to some ill-advised shortcut that seemed harmless at the time, but I won't rule out bugs and environmental factors as possible causes.

Here is our simple test case: turn our boot environment from the previous example into a zombie - something that is neither alive nor dead but just takes up space and causes a mild annoyance.

Important note: Don't try this on a production system. This is for demonstration purposes only.
# dd if=/dev/random of=/etc/lu/ICF.5 bs=2048 count=2
0+2 records in
0+2 records out

# ludelete -f test
System has findroot enabled GRUB
No entry for BE <test> in GRUB menu
ERROR: The mount point </.alt.tmp.b-fxc.mnt> is not a valid ABE mount point (no /etc directory found).
ERROR: The mount point </.alt.tmp.b-fxc.mnt> provided by the <-m> option is not a valid ABE mount point.
Usage: lurootspec [-l error_log] [-o outfile] [-m mntpt]
ERROR: Cannot determine root specification for BE <test>.
ERROR: boot environment <test> is not mounted
Unable to delete boot environment.
Our first task is to make sure that any partially mounted boot environment is cleaned up. A df should help us here.
# df -k | tail -5
arrakis/xvm/opensolaris 350945280      19 17448377     1%    /xvm/opensolaris
arrakis/xvm/s10u5    350945280      19 17448377     1%    /xvm/s10u5
arrakis/xvm/ub710    350945280      19 17448377     1%    /xvm/ub710
swap                 4549680       0 4549680     0%    /.alt.tmp.b-fxc.mnt/var/run
swap                 4549680       0 4549680     0%    /.alt.tmp.b-fxc.mnt/tmp


# umount /.alt.tmp.b-fxc.mnt/tmp
# umount /.alt.tmp.b-fxc.mnt/var/run
Ordinarily you would use lufslist(1M) to try to determine which file systems are in use by the boot environment you are trying to delete. In this worst-case scenario that is not possible. A bit of forensic investigation and a bit more courage will help us figure this out.

The first place we will look is /etc/lutab. This is the configuration file that lists all boot environments known to Live Upgrade. There is a man page for this in section 4, so it is somewhat of a public interface, but please take note of the warning:
 
        The lutab file must not be edited by hand. Any user  modifi-
        cation  to  this file will result in the incorrect operation
        of the Live Upgrade feature.
This is very good advice, and failing to follow it has led to some of my most spectacular Live Upgrade meltdowns. But in this case Live Upgrade is already broken and it may be possible to undo the damage and restore proper operation. So let's see what we can find out.
# cat /etc/lutab
# DO NOT EDIT THIS FILE BY HAND. This file is not a public interface.
# The format and contents of this file are subject to change.
# Any user modification to this file may result in the incorrect
# operation of Live Upgrade.
3:s10u5_baseline:C:0
3:/:/dev/dsk/c2d0s0:1
3:boot-device:/dev/dsk/c2d0s0:2
1:s10u5_lu:C:0
1:/:/dev/dsk/c5d0s0:1
1:boot-device:/dev/dsk/c5d0s0:2
2:s10u6_ufs:C:0
2:/:/dev/dsk/c4d0s0:1
2:boot-device:/dev/dsk/c4d0s0:2
4:s10u6_baseline:C:0
4:/:rpool/ROOT/s10u6_baseline:1
4:boot-device:/dev/dsk/c4d0s3:2
10:route66:C:0
10:/:rpool/ROOT/route66:1
10:boot-device:/dev/dsk/c4d0s3:2
11:nv95:C:0
11:/:rpool/ROOT/nv95:1
11:boot-device:/dev/dsk/c4d0s3:2
6:s10u7-baseline:C:0
6:/:rpool/ROOT/s10u7-baseline:1
6:boot-device:/dev/dsk/c4d0s3:2
7:nv114:C:0
7:/:rpool/ROOT/nv114:1
7:boot-device:/dev/dsk/c4d0s3:2
5:test:C:0
5:/:rpool/ROOT/test:1
5:boot-device:/dev/dsk/c4d0s3:2
We can see that the boot environment named test is (still) BE #5 and has its root file system at rpool/ROOT/test. This is the default dataset name and indicates that the boot environment has not been renamed. Consider the following example for a more complicated configuration.
# lucreate -n scooby
# lufslist scooby | grep ROOT
rpool/ROOT/scooby       zfs            241152 /                   -
rpool/ROOT              zfs       39284664832 /rpool/ROOT         -

# lurename -e scooby -n doo
# lufslist doo | grep ROOT
rpool/ROOT/scooby       zfs            241152 /                   -
rpool/ROOT              zfs       39284664832 /rpool/ROOT         -
The point is that we have to trust the contents of /etc/lutab, but it does not hurt to do a bit of sanity checking before we start deleting ZFS datasets. To remove boot environment test from the view of Live Upgrade, delete the three lines in /etc/lutab starting with 5 (in this example). We should also remove its Internal Configuration File (ICF) /etc/lu/ICF.5
# mv -f /etc/lutab /etc/lutab.old
# grep -v \^5: /etc/lutab.old > /etc/lutab
# rm -f /etc/lu/ICF.5

# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
s10u5_baseline             yes      no     no        yes    -         
s10u5_lu                   yes      no     no        yes    -         
s10u6_ufs                  yes      no     no        yes    -         
s10u6_baseline             yes      no     no        yes    -         
route66                    yes      no     no        yes    -         
nv95                       yes      yes    yes       no     -         
s10u7-baseline             yes      no     no        yes    -         
nv114                      yes      no     no        yes    -         
If the boot environment being deleted is in UFS then we are done. Well, not exactly - but pretty close. We still need to propagate the updated configuration files to the remaining boot environments. This will be done during the next live upgrade operation (lucreate, lumake, ludelete, luactivate) and I would recommend that you let Live Upgrade handle this part. The exception to this will be if you boot directly into another boot environment without activating it first. This isn't a recommended practice and has been the source of some of my most frustrating mistakes.

If the exorcised boot environment is in ZFS then we still have a little bit of work to do. We need to delete the old root datasets and any snapshots that they may have been cloned from. In our example the root dataset was rpool/ROOT/test. We need to look for any children as well as the originating snapshot, if present.
# zfs list -r rpool/ROOT/test
NAME                  USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/test       234K  6.47G  8.79G  /.alt.test
rpool/ROOT/test/var    18K  6.47G    18K  /.alt.test/var

# zfs get -r origin rpool/ROOT/test
NAME                 PROPERTY  VALUE                       SOURCE
rpool/ROOT/test      origin    rpool/ROOT/nv95@test        -
rpool/ROOT/test/var  origin    rpool/ROOT/nv95/var@test    -
# zfs destroy rpool/ROOT/test/var
# zfs destroy rpool/ROOT/nv95/var@test
# zfs destroy rpool/ROOT/test
# zfs destroy rpool/ROOT/nv95@test
Important note: luactivate will promote the newly activated root dataset so that snapshots used to create alternate boot environments should be easy to delete. If you are switching between boot environments without activating them first (which I have already warned you about doing), you may have to manually promote a different dataset so that the snapshots can be deleted.
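
A hedged sketch of that manual promotion, reusing this example's dataset names: if a zfs destroy of an origin snapshot fails because a clone depends on it, promote the clone you intend to keep, then retry the destroy.

# zfs promote rpool/ROOT/test
# zfs destroy rpool/ROOT/nv95@test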

To BE or not to BE - how about no BE ?

You may find yourself in a situation where you have things so scrambled up that you want to start all over again. We can use what we have just learned to unwind Live Upgrade and start from a clean configuration. Specifically we want to delete /etc/lutab, the ICF and related files, all of the temporary files in /etc/lu/tmp and a few files that hold environment variables for some of the lu scripts. And if using ZFS we will also have to delete any datasets and snapshots that are no longer needed.
 
# rm -f /etc/lutab 
# rm -f /etc/lu/ICF.* /etc/lu/INODE.* /etc/lu/vtoc.*
# rm -f /etc/lu/.??*
# rm -f /etc/lu/tmp/* 

# lustatus
ERROR: No boot environments are configured on this system
ERROR: cannot determine list of all boot environment names

# lucreate -c scooby -n doo
Checking GRUB menu...
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <scooby>.
Creating initial configuration for primary boot environment <scooby>.
The device </dev/dsk/c4d0s3> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <scooby> PBE Boot Device </dev/dsk/c4d0s3>.
Comparing source boot environment <scooby> file systems with the file 
system(s) you specified for the new boot environment. Determining which 
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <doo>.
Source boot environment is <scooby>.
Creating boot environment <doo>.
Cloning file systems from boot environment <scooby> to create boot environment <doo>.
Creating snapshot for <rpool/ROOT/scooby> on <rpool/ROOT/scooby@doo>.
Creating clone for <rpool/ROOT/scooby@doo> on <rpool/ROOT/doo>.
Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/doo>.
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <doo> as <mount-point>//boot/grub/menu.lst.prev.
File </boot/grub/menu.lst> propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE <doo> in GRUB menu
Population of boot environment <doo> successful.
Creation of boot environment <doo> successful.

# luactivate doo
System has findroot enabled GRUB
Generating boot-sign, partition and slice information for PBE <scooby>

File  deletion successful
File  deletion successful
File  deletion successful
Activation of boot environment <doo> successful.

# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
scooby                     yes      yes    no        no     -         
doo                        yes      no     yes       no     -        
Pretty cool, eh ?

There are still a few more interesting corner cases, but we will deal with those in one of the next articles. In the meantime, please remember to
  • Check Infodoc 206844 for Live Upgrade patch requirements
  • Keep your patching and package utilities updated
  • Use luactivate to switch between boot environments



Tuesday May 19, 2009

This is why you do after run maintenance

Do you wonder why one racer's truck is always faster and handles better than yours ?

It is often the little things, like regular maintenance. Off road trucks take a tremendous amount of punishment, especially if you are prone to running into curbs or cartwheeling off a ramp. At the end of every session you should give your truck a thorough examination and replace parts like this bent hinge pin.

Photos: 03 - a bent hinge pin; 04 - a straight hinge pin for comparison


It may not seem like much, but this hinge pin will cause the steering block to bind in the caster block, which will impact the handling of the truck. This will be more pronounced coming out of a turn, but it may even prevent your truck from keeping a straight line. If this is a four wheel drive truck you may experience a lack of power when turning. You will also be creating additional work for your steering servo, which can lead to shorter run times and premature failure of the servo itself. And this is completely unnecessary. A replacement hinge pin is a $2 part and takes just a few seconds to replace.

Before you invest in an expensive brushless power plant and high capacity energy source, do the little things.

Get your truck ready for the stresses of a more powerful motor.
  • Reduce rolling friction everywhere (large bearings on the axle carriers and steering blocks)
  • Stronger lower suspension arms
  • Aluminum caster blocks to prevent bending your hinge pins in the future
  • Steel transmission and differential gears
  • Heavy duty output drives
  • Large front and rear bumpers to protect suspension parts and your new motor
  • Heavy steel steering links and, if available, adjustable steel camber turnbuckles
  • Heavy duty aluminum shock towers (as the stock ones fail)
  • Get a range of pinion and spur gears (and start at the low end of the recommended gearing range and move up if the motor, esc and battery stay cool)
Once you have done these upgrades, then get yourself a Mamba Max or Novak Havoc system. Once you have run a brushless system with lipo batteries you will never go back to brushed motors. I'm even converting my 1/18 scale trucks to brushless, as time and repairs permit.

And at the end of every race or afternoon of bashing
  • Thoroughly inspect your truck for damage and replace all broken parts
  • Clean dirt away from electronics (radio, speed control, servos)
  • Make sure your suspension moves freely
  • Replace any bent screws or hinge pins
  • Tighten screws that have become loose - be careful with plastic parts as they can easily strip
  • Oil your motor bearings, and if brushed clean with motor spray and check your brushes and commutator
This will keep your truck in top running form and fun for years to come.


Tuesday Mar 17, 2009

Time-slider saves the day (or at least a lot of frustration)

As I was tidying up my Live Upgrade boot environments yesterday, I did something that I thought was terribly clever but had some pretty wicked side effects. While linking up all of my application configuration directories (firefox, mozilla, thunderbird, [g]xine, staroffice) I got blindsided by the GNOME instant messaging client pidgin - or more specifically, by one of the migration assistants from GAIM to pidgin.

As a quick background, Solaris, Solaris Express Community Edition (SXCE), and OpenSolaris all have different versions of the GNOME desktop. Since some of the configuration settings are incompatible across releases the easy solution is to keep separate home directories for each version of GNOME you might use. Which is fine until you grow weary of setting your message filters for Thunderbird again or forget which Firefox has that cached password for the local recreation center that you only use once a year. Pretty quickly you come up with the idea of a common directory for all shared configuration files (dot directories, collections of pictures, video, audio, presentations, scripts).

For one boot environment you do something like
$ mkdir /export/home/me
$ for dotdir in .thunderbird .purple .mozilla .firefox .gxine .xine .staroffice .wine .staroffice* .openoffice* .VirtualBox .evolution bin lib misc presentations
> do
> mv $dotdir /export/home/me
> ln -s /export/home/me/$dotdir   $dotdir
> done
And for the other GNOME home directories you do something like
$ for dotdir in .thunderbird .purple .mozilla .firefox .gxine .xine .staroffice .wine .staroffice* .openoffice* .VirtualBox .evolution bin lib misc presentations
> do
> mv $dotdir ${dotdir}.old
> ln -s /export/home/me/$dotdir   $dotdir
> done
And all is well. Until......

Booted into Solaris 10 and fired up pidgin, thinking I would get all of my accounts activated and the default chatrooms started. Instead I was met by a rather nasty note that I had incompatible GAIM entries and it would try to convert them for me. What it did was wipe out all of my pidgin settings. And sure enough, when I looked into the shared directory, .purple contained all new and quite empty configuration settings.

This is where I am hoping to get some sympathy, since we have all done things like this. But then I remembered I had started time-slider earlier in the day (from the OpenSolaris side of things).
$ time-slider-setup
And there were my .purple files from 15 minutes ago, right before the GAIM conversion tools made a mess of them.
$ cd /export/home/.zfs/snapshot
$ ls
zfs-auto-snap:daily-2009-03-16-22:47
zfs-auto-snap:daily-2009-03-17-00:00
zfs-auto-snap:frequent-2009-03-17-11:45
zfs-auto-snap:frequent-2009-03-17-12:00
zfs-auto-snap:frequent-2009-03-17-12:15
zfs-auto-snap:frequent-2009-03-17-12:30
zfs-auto-snap:hourly-2009-03-16-22:47
zfs-auto-snap:hourly-2009-03-16-23:00
zfs-auto-snap:hourly-2009-03-17-00:00
zfs-auto-snap:hourly-2009-03-17-01:00
zfs-auto-snap:hourly-2009-03-17-02:00
zfs-auto-snap:hourly-2009-03-17-03:00
zfs-auto-snap:hourly-2009-03-17-04:00
zfs-auto-snap:hourly-2009-03-17-05:00
zfs-auto-snap:hourly-2009-03-17-06:00
zfs-auto-snap:hourly-2009-03-17-07:00
zfs-auto-snap:hourly-2009-03-17-08:00
zfs-auto-snap:hourly-2009-03-17-09:00
zfs-auto-snap:hourly-2009-03-17-10:00
zfs-auto-snap:hourly-2009-03-17-11:00
zfs-auto-snap:hourly-2009-03-17-12:00
zfs-auto-snap:monthly-2009-03-16-11:38
zfs-auto-snap:weekly-2009-03-16-22:47

$ cd zfs-auto-snap:frequent-2009-03-17-12:15/me/.purple
$ rm -rf /export/home/me/.purple/*
$ cp -r * /export/home/me/.purple

(and this is really, really important)
$ mv $HOME/.gaim $HOME/.gaim-never-to-be-heard-from-again

Log out and back in to refresh the GNOME configuration settings and everything is as it should be. OpenSolaris time-slider is just one more reason that I'm glad that it is my daily driver.
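
Even if time-slider isn't running, a manual snapshot before any risky dot-file surgery buys the same insurance (the dataset name is from my system; adjust for yours):

$ pfexec zfs snapshot rpool/export/home@before-dotfile-surgery
$ ls /export/home/.zfs/snapshot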


Monday Mar 02, 2009

Alaska and Oregon Solaris Boot Camps

A big thanks to all who attended the Solaris Boot Camps in Juneau, Fairbanks, Portland and Salem. I hope that you found the information useful. And thanks for all of the good questions and discussion.

Here are the materials that were used during the bootcamp.

Please send me email if you have any questions or want to follow up on any of the discussions.

Thanks again for your attendance and continued support for Solaris.


Tuesday Nov 11, 2008

OpenSolaris 2008.11 Release Candidate 1B (nv101a) is now available for testing

The initial release candidate (rc1b) for OpenSolaris 2008.11 (based on nv101a) is now available for download and testing. Additional (larger) images are available for non-English locales as well as USB images for faster installs. If you have not played with a USB image you will be dazzled at the speed of the installation. Amazing what happens when you eliminate all those slow seeks.
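
If you go the USB route, the usbcopy utility writes the image to a stick for you. A sketch - I believe usbcopy lives in the distribution-constructor package, but treat that package name as an assumption and check:

$ pfexec pkg install distribution-constructor
$ pfexec usbcopy osol-0811-rc1b.usb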

The new release candidate has quite a few interesting features and updates. The items that caught my attention were
  • IPS Package Manager
  • Automatically cloning root file system (beadm clone) during image update - see the sketch after this list
  • GNOME 2.24
  • Evolution 2.24 for those of us that are stubborn enough to continue using it
  • OpenOffice 3.0
  • Songbird - an iTunes-like media player. Still needs lots of codecs (like the free Fluendo MP3 decoder) to be really useful
  • Brasero - a Nero-like media burner
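
As promised above, the beadm clone item deserves a closer look. Because the update is applied to a clone of the running boot environment, a bad update is painless to back out of. A minimal sketch of the flow (the BE name in the last command is illustrative; use whatever beadm list shows as your previous BE):

$ pfexec pkg image-update
$ beadm list
$ pfexec beadm activate opensolaris

If the new BE misbehaves, activate the old one and reboot - nothing was modified in place.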
Our own Dan Roberts has more to say on the subject in this video podcast.

Using the graphical package manager it only took a few minutes to set up the installation plan for a nice web based development system including Netbeans, a web stack (including Glassfish), and a Xen based virtualization system.

OpenSolaris 2008.11 is shaping up to be quite a nice release. Now that I have figured out how to make it play nicely in a root zpool with other Solaris releases, I will be spending a lot more time with it as the daily driver.

Download it, play with it, and please remember to file bugs when you run into things that don't work.


Tuesday Nov 04, 2008

Solaris and OpenSolaris coexistence in the same root zpool

Some time ago, my buddy Jeff Victor gave us FrankenZone. An idea that is disturbingly brilliant. It has taken me a while, but I offer for your consideration VirtualBox as a V2P platform for OpenSolaris. Nowhere near as brilliant, but at least as unusual. And you know that you have to try this out at home.

Note: This is totally a science experiment. I fully expect to see the two guys from Myth Busters showing up at any moment. It also requires at least build 100 of OpenSolaris on both the host and guest operating system to work around the hostid difficulties.

With the caveats out of the way, let me set the back story to explain how I got here.

Until virtualization technologies become ubiquitous and nothing more than BIOS extensions, multi-boot configurations will continue to be an important capability. And for those working with [Open]Solaris there are several limitations that complicate this unnecessarily. Rather than lamenting these, the possibility of leveraging ZFS root pools, now in Solaris 10 10/08, should offer up some interesting solutions.

What I want to do is simple - have a single Solaris fdisk partition that can have multiple versions of Solaris all bootable with access to all of my data. This doesn't seem like much of a request, but as of yet this has been nearly impossible to accomplish in anything close to a supportable configuration. As it turns out the essential limitation is in the installer - all other issues can be handled if we can figure out how to install OpenSolaris into an existing pool.

What we will do is use our friend VirtualBox to work around the installer issues. After installing OpenSolaris in a virtual machine we take a ZFS snapshot, send it to the bare metal Solaris host and restore it in the root pool. Finally we fix up a few configuration files to make everything work and we will be left with a single root pool that can boot Solaris 10, Solaris Express Community Edition (nevada), and OpenSolaris.

How cool is that :-) Yeah, it is that cool. Let's proceed.

Prepare the host system

The host system is running a fresh install of Solaris 10 10/08 with a single large root zpool. In this example the root zpool is named panroot. There is also a separate zpool that contains data that needs to be preserved in case a re-installation of Solaris is required. That zpool is named pandora, but it doesn't matter - it will be automatically imported in our new OpenSolaris installation if all goes well.
# lustatus 
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
s10u6_baseline             yes      no     no        yes    -         
s10u6                      yes      no     no        yes    -         
nv95                       yes      yes    yes       no     -         
nv101a                     yes      no     no        yes    -    

     
# zpool list
NAME      SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
pandora  64.5G  56.9G  7.61G    88%  ONLINE  -
panroot    40G  26.7G  13.3G    66%  ONLINE  -
One challenge that came up was the less than stellar performance of ssh over the VirtualBox NAT interface. So rather than fight this I set up a shared NFS file system in the root pool to stage the ZFS backup file. This made the process go much faster.

In the host Solaris system
# zfs create -o sharenfs=rw,anon=0 -o mountpoint=/share panroot/share

Prepare the OpenSolaris virtual machine

If you have not already done so, get a copy of VirtualBox, install it and set up a virtual machine for OpenSolaris.

Important note: Do not install the VirtualBox guest additions. This will install some SMF services that will fail when booted on bare metal.

Send a ZFS snapshot to the host OS root zpool

Let's take a look around the freshly installed OpenSolaris system to see what we want to send.

Inside the OpenSolaris virtual machine
bash-3.2$ zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
rpool                   6.13G  9.50G    46K  /rpool
rpool/ROOT              2.56G  9.50G    18K  legacy
rpool/ROOT/opensolaris  2.56G  9.50G  2.49G  /
rpool/dump               511M  9.50G   511M  -
rpool/export            2.57G  9.50G  2.57G  /export
rpool/export/home        604K  9.50G    19K  /export/home
rpool/export/home/bob    585K  9.50G   585K  /export/home/bob
rpool/swap               512M  9.82G   176M  -
My host system root zpool (panroot) already has swap and dump, so these won't be needed. And it also has an /export hierarchy for home directories. I will recreate my OpenSolaris Primary System Administrator user once on bare metal, so it appears the only thing I need to bring over is the root dataset itself.

Inside the OpenSolaris virtual machine
bash-3.2$ pfexec zfs snapshot rpool/ROOT/opensolaris@scooby
bash-3.2$ pfexec zfs send rpool/ROOT/opensolaris@scooby > /net/10.0.2.2/share/scooby.zfs
We are now done with the virtual machine. It can be shut down and the storage reclaimed for other purposes.

Restore the ZFS dataset in the host system root pool

In addition to receiving the OpenSolaris root dataset into the host's root pool, the canmount property should be set to noauto. I also destroy the NFS shared file system since it will no longer be needed.
# zfs receive panroot/ROOT/scooby < /share/scooby.zfs
# zfs set canmount=noauto panroot/ROOT/scooby
# zfs destroy panroot/share
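Before moving on, a quick check that the new dataset landed alongside the Live Upgrade boot environments:
# zfs list -r panroot/ROOT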
Now mount the new OpenSolaris root filesystem and fix up a few configuration files. Specifically:
  • /etc/zfs/zpool.cache so that all boot environments have the same view of available ZFS pools
  • /etc/hostid to keep all of the boot environments using the same hostid. This is extremely important: failure to do so will leave some of your boot environments unbootable - which isn't very useful. /etc/hostid is new as of build 100.
Rebuild the OpenSolaris boot archive and we will be done with that filesystem.
# zfs set canmount=noauto panroot/ROOT/scooby
# zfs set mountpoint=/mnt panroot/ROOT/scooby
# zfs mount panroot/ROOT/scooby

# cp /etc/zfs/zpool.cache /mnt/etc/zfs
# cp /etc/hostid /mnt/etc/hostid

# bootadm update-archive -f -R /mnt
Creating boot_archive for /mnt
updating /mnt/platform/i86pc/amd64/boot_archive
updating /mnt/platform/i86pc/boot_archive

# umount /mnt
Make a home directory for your OpenSolaris administrator user (in this example the user is named admin). Also add a GRUB stanza so that OpenSolaris can be booted.
# mkdir -p /export/home/admin
# chown admin:admin /export/home/admin
# cat >> /panroot/boot/grub/menu.lst <<\DOO
title Scooby
root (hd0,3,a)
bootfs panroot/ROOT/scooby
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive
DOO
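If you want to double check the menu before rebooting, bootadm(1M) will show the stanzas GRUB will offer:
# bootadm list-menu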
At this point we are done. Reboot the system and you should see a new GRUB stanza for our new OpenSolaris installation (scooby). Cue large audience applause track.

Live Upgrade and OpenSolaris Boot Environment Administration

One interesting side effect, and a positive one, is the healthy interaction of Live Upgrade and beadm(1M). For your Solaris and nevada based installations you can continue to use lucreate(1M), luupgrade(1M) and luactivate(1M). On the OpenSolaris side you can see all of your Live Upgrade boot environments as well as your OpenSolaris boot environments, and you can create and activate new boot environments as needed.

When in OpenSolaris
# beadm list
BE                           Active Mountpoint Space   Policy Created          
--                           ------ ---------- -----   ------ -------          
nv101a                       -      -          18.17G  static 2008-11-04 00:03 
nv95                         -      -          122.07M static 2008-11-03 12:47 
opensolaris                  -      -          2.83G   static 2008-11-03 16:23 
opensolaris-2008.11-baseline R      -          2.49G   static 2008-11-04 11:16 
s10u6                        -      -          97.22M  static 2008-11-03 12:03 
s10x_u6wos_07b               -      -          205.48M static 2008-11-01 20:51 
scooby                       N      /          2.61G   static 2008-11-04 10:29 

# beadm create doo
# beadm activate doo
# beadm list
BE                           Active Mountpoint Space   Policy Created          
--                           ------ ---------- -----   ------ -------          
doo                          R      -          5.37G   static 2008-11-04 16:23 
nv101a                       -      -          18.17G  static 2008-11-04 00:03 
nv95                         -      -          122.07M static 2008-11-03 12:47 
opensolaris                  -      -          25.5K   static 2008-11-03 16:23 
opensolaris-2008.11-baseline -      -          105.0K  static 2008-11-04 11:16 
s10u6                        -      -          97.22M  static 2008-11-03 12:03 
s10x_u6wos_07b               -      -          205.48M static 2008-11-01 20:51 
scooby                       N      /          2.61G   static 2008-11-04 10:29 

For the first time I have a single Solaris disk environment that can boot Solaris 10, Solaris Express Community Edition (nevada) or OpenSolaris, with access to all of my data. I did have to add a mount for my shared FAT32 file system (I have an iPhone and several iPods, so Windows does occasionally get opened); a sample mount is sketched below. Now off to the repository to start playing with all of the new OpenSolaris goodies like Songbird, Brasero, Bluefish and the Xen bits.
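
For the curious, the FAT32 mount is a garden variety pcfs mount. A sketch with an illustrative device path (the :c suffix selects the primary DOS partition):
# mkdir -p /fat32
# mount -F pcfs /dev/dsk/c1t0d0p0:c /fat32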

Technocrati Tags:

Tuesday Sep 23, 2008

LDOMs or Containers, that is the question....

An often asked question: do I put my application in a container (zone) or an LDOM? My question in reply is: why the or? The two technologies are not mutually exclusive, and in practice their combination can yield some very interesting results. So if it is not an or, under what circumstances would I apply each technology? And does it matter if I substitute LDOMs with VMware, Xen, VirtualBox or Dynamic System Domains? In this context all virtual machine technologies are similar enough to treat as a class, so we will generalize to zones vs. virtual machines for the rest of this discussion.

First to the question of zones. All applications in Solaris 10 and later should be deployed in zones, with the following exceptions:
  • The restricted set of privileges in a zone will not allow the application to operate correctly
  • The application interacts with the kernel in an intimate fashion (reads or writes kernel data)
  • The application loads or unloads kernel modules
  • There is a higher level virtualization or abstraction technology in use that would obviate any benefits from deploying the application in a zone
Presented a different way, if the security model allows the application to run and you aren't diminishing the benefits of a zone, deploy in a zone.

Some examples of applications that have difficulty with the restricted set of privileges would be security monitoring and auditing, hardware monitoring, storage (volume) management software, specialized file systems, some forms of application monitoring, and intrusive debugging and inspection tools that use kernel facilities such as the DTrace FBT provider. With the introduction of configurable zone privileges in Solaris 10 11/06, the number of applications that fall into this category should be small, highly specialized and not the type of application you would want to deploy in a zone anyway.
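
To make the privilege configuration concrete, here is a minimal zonecfg sketch that grants a zone the DTrace user-level privileges (the zone name is hypothetical):
# zonecfg -z webzone
zonecfg:webzone> set limitpriv=default,dtrace_proc,dtrace_user
zonecfg:webzone> commit
zonecfg:webzone> exit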

For the higher level abstraction exclusion, think of something at the application layer that tries to hide the underlying platform. The best example would be Oracle RAC. RAC abstracts the details of the platform so that it can provide continuously operating database services. It also has the characteristic that it is itself a consolidation platform with some notion of resource controls. Given the complexity associated with RAC, it would not be a good idea to consolidate non-RAC workloads on a RAC cluster. And since zones are all about consolidation, RAC would trump zones in this case.

There are other examples, such as load balancers and transaction monitors. These are typically deployed on smaller, horizontally scalable servers to provide greater bandwidth or increased service availability. Although they do not provide consolidation services, their sophisticated availability features might not interact well with the non-global zone's restrictive security model. High availability frameworks such as SunCluster, on the other hand, do work well with zones. Zones abstract applications in such a way that service failover configurations can be significantly simplified.

Unless your application falls under one of these exceptions, it should be deployed in a zone.

What about virtual machines? This type of abstraction happens at a much lower level: hardware resources (processors, memory, I/O). In contrast, zones abstract user space objects (processes, network stacks, resource controls). Virtual machines allow greater flexibility in running many types and versions of operating systems and applications, but they also eliminate many opportunities to share resources efficiently.

Where would I use virtual machines? Where you need the diversity of multiple operating systems. This can be different types of operating system (Windows, Linux, Solaris) or different versions or patch levels of the same operating system. The challenge here is that large sites can have servers at many different patch and update levels, not by design but as a result of inadequate patching and maintenance tools. Enterprise patch management tools (xVM OpsCenter), patch managers (PCA) or automated provisioning tools (OpsWare) can help reduce the number of software combinations, and online maintenance using Live Upgrade can reduce the time and effort required to maintain systems.

It is important to understand that zones are not virtual machines. The differences, and their implications, are:
  • Zones provide application isolation on a shared kernel
  • Zones share resources very efficiently (shared libraries, system caches, storage)
  • Zones have a configurable and restricted set of privileges
  • Zones allow for easy application of resource controls even in a complex dynamic application environment
  • Virtual machines provide relatively complete isolation between operating systems
  • Virtual machines allow consolidation of many types and versions of operating systems
  • Although virtual machines may allow oversubscription of resources, they provide very few opportunities to share critical resources
  • An operating system running in a virtual machine can still isolate applications using zones.
And it is that last point that carries this conversation a bit farther. If the decision between zones and virtual machines isn't an or, under what conditions would it be an and, and what sort of benefit can be expected?

Consider the case of application consolidation. Suppose you have three applications: A, B and C. If they are consolidated without isolation then system maintenance becomes cumbersome as you can only patch or upgrade when all three application owners agree. Even more challenging is the time pressure to certify the newly patched or upgraded environment due to the fact that you have to test three things instead of one. Clearly isolation is a benefit in this case, and it is a persistent property (once isolated, forever isolated).

Isolation using zones alone will be very efficient but there will be times when the common shared kernel will be inconvenient - approaching the problems of the non-isolated case. Isolation using virtual machines is simple and very flexible but comes with a cost that might be unnecessary.

So why not do both ? Use zones to isolate the applications and use virtual machines for those times when you cannot support all of the applications with a common version of the operating system. In other words the isolation is a persistent property and the need for heterogeneous operating systems is temporary and specific. With some small improvements in the patching and upgrade tools, the time frame when you need heterogeneous operating systems can be reduced.

Using our three applications as an example, A, B and C are deployed in separate zones on a single system image, either on bare metal or in a virtual machine. Everything is operating spectacularly until a new OS upgrade becomes available that provides some important new functionality for application A. So application owner A wants to upgrade immediately, application B doesn't care one way or the other, and (naturally) application C has just gone into seasonal lock-down and cannot be altered for the rest of the year.

Using zones and virtual machines together provides a unique solution. Provision a new virtual machine with the new operating system software, either on the same platform by reassigning resources (CPU, memory) or on a separate platform. Next, clone the zone running application A, detach the newly cloned zone and migrate it to the new virtual machine. A new feature in Solaris 10 10/08 will automatically upgrade the zone upon attachment to a server running newer software, as sketched below. Leave the original zone alone for some period of time in case a regression appears that forces you to revert to the original version. Eventually the original zone can be reclaimed, at a time that is convenient.
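
A sketch of that migration, with hypothetical zone and path names; the -u flag on attach is the new Solaris 10 10/08 update-on-attach feature:
# zoneadm -z appA-clone detach
Move the zonepath to the new system image (zfs send/receive or cpio both work), then on the new image:
# zonecfg -z appA-clone create -a /zones/appA-clone
# zoneadm -z appA-clone attach -u
# zoneadm -z appA-clone boot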

Migrate the other two applications at a convenient time using the same procedure. When all of the applications have been migrated and you are comfortable that they have been adequately tested, the old system image can be shut down and any remaining resources reclaimed for other purposes. Zones as the sole isolation agent cannot do this, and virtual machines by themselves require more administrative effort and higher resource consumption during the long periods when you don't need different versions of the operating system. Combined, you get the best of both.

A less obvious example is ISV licensing. Consider the case of Oracle. Our friends at Oracle consider the combination of zones and capped resource controls to be a hard partitioning method, which allows you to license their software to the size of the resource cap rather than the size of the server. If you put Oracle in a zone on a 16 core system with a resource cap of 2 cores, you only pay for 2 cores. They have made similar considerations for their Xen-based Oracle VM product, yet have been slow to respond to other virtual machine technologies. Zones to the rescue. If you deploy Oracle in a VM on a 16 core server, you pay for all 16 cores. Put that same application in a zone inside the same VM, capped at 4 cores, and you only pay for 4 cores.
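
A sketch of such a cap using the capped-cpu resource in zonecfg (the zone name is hypothetical):
# zonecfg -z oradb
zonecfg:oradb> add capped-cpu
zonecfg:oradb:capped-cpu> set ncpus=4
zonecfg:oradb:capped-cpu> end
zonecfg:oradb> commit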

Zones are all about isolation and the application of resource controls. Virtual machines are all about heterogeneous operating systems. Use zones to persistently isolate applications. Use virtual machines during the times when a single operating system version is not feasible.

This is only the beginning of the conversation. A new Blueprint based on measured results from some more interesting use cases is clearly needed. Jeff Savit, Jeff Victor and I will be working on this over the next few weeks and I'm sure that we will be blogging with partial results as they become available. As always, questions and suggestions are welcome.