Monday Sep 07, 2009

Creating an iPhone Custom Ringtone



As a long-time Palm user, I found the transition to the iPhone had more than its share of difficulties. Thanks to some software updates (cut and paste) and VirtualBox, assimilation is complete and I have become a big fan of the little device. With that as background, let's explore creating custom ringtones from items in your music library. Contrary to what iTunes tells you, it is possible to create custom ringtones from any DRM-free content in your music library - not just the items you purchased from the iTunes store.

1. Choosing the proper ringtone

Everybody in your contact list deserves their own distinctive ringtone - or at least the ones who actually call you. One word of warning, though: make sure you choose an appropriate audio source and consider what happens if they accidentally dial your phone while standing next to you.

True story: my first custom ringtone was for my spouse of 26 (and counting) years. We don't really have an "our song," so I opted for something instantly recognizable. In five notes or less. Deep Purple's Smoke on the Water is too obvious and pedestrian. What's the next best riff? Think Ronnie Montrose.

As I am creating the ringtone, a buddy at work notices what I am doing and taps me on the shoulder. "Dude, are you like wanting to make her mad? Are you nuts - do you only want to live with half of your stuff?"

I guess my quizzical expression indicated that I needed further explanation.

"Frankenstein. Dude, you chose Frankenstein. ..... Frankenstein ???? Get it ?"

"Oh. OH!!!! I get it"

While Ronnie Montrose's opening riff from Edgar Winter's Frankenstein is one of the most recognizable in the rock catalog, it is clearly an inappropriate ringtone for a loved one. My bride has a great sense of humor, but I doubt that it would extend to this.

Please learn from my near-fatal mistake - choose a good ringtone. In my example I will use Brainstorm from Hawkwind's third album, Doremi Fasol Latido. Another great opening riff. And no, I will not use this for my spouse, daughter, boss, or any other person real or fictional.

2. Edit the source for your ringtone

A ringtone needs to be a short AAC encoded audio file, at least 10 seconds in length and no longer than 30 seconds. If you are a computer wizard this is not a difficult task - then again, if you were, you wouldn't be reading this howto. Fortunately for the rest of us, iTunes can do this quite nicely.

After listening to Brainstorm, I have decided to use the first 25 seconds for my ringtone.

Right click the song you want to use for your ringtone - Brainstorm in this example.

Select Get Info and then click the Options tab.

Fill in the start and stop times in the boxes as I have on the left.

Note: The start and stop times do not have to be in whole seconds. Play with these numbers a bit if you want to cut out something like a voice or drum beat. A few tenths of a second can make a big difference.

Click OK and you will have an edited sound source that we will use to make our new ringtone.

3. Create an AAC encoded audio file



Right click your edited song and look for "Create AAC Version". If it is there, skip over the next few steps.

If you see Create MP3 Version instead, don't panic. A lot of the other howtos that I found skipped this - and it's not exactly obvious what to do next.

Which Create option you see is based on your CD import settings. The iTunes default is AAC, which is correct for creating ringtones. It is also less than desirable if you want to get the most out of your iPod listening. Or (gasp) if you want to listen to your audio files with anything other than (another gasp) iTunes.

If you see Create MP3 Version as on the left all you need to do is change your CD Import preferences. Click Edit -> Preferences and while on the General tab click Import Settings.


Take note of your current settings. You will want to change them back when you are done.

Select AAC Encoder in the Import Using: drop-down box.

Since the internal speaker in the iPhone isn't exactly high fidelity, the encoding rate isn't important. Click OK and we are ready to make the audio file that will eventually become our ringtone.

4. Create an AAC encoded audio file - this time we mean it





Now, right click your edited audio source and select Create AAC Version. You should see iTunes start the encoder, and in a few seconds you will have another copy of your song. The length of the ringtone will be shown in the Time column.

Important: Before you forget, and you will - trust me, go back to the original song and clear out the start and stop times. Unless you really like listening to your ringtone in your iPod.

Select the original song, right click for the song menu, select Get Info, click the Options tab and uncheck Start Time and Stop Time.

5. What's in a name - quite a lot actually

The next step is to copy the new audio file to your Desktop, where we will rename it. Click and drag the copy of the song you just created to your desktop. This should create a new icon, which is really a file in your Desktop directory. The file extension is .m4a, which associates it with an iTunes song (audio file). What we need to do is rename it with a .m4r extension, which marks it as a ringtone.

Before you forget, delete the audio file you created from your iTunes library. It is of no further use.

Now open up a command window. Click Start -> All Programs -> Accessories -> Command Prompt or Start -> Run Program and enter cmd. Either way works.

In the command shell, change your directory down one level to Desktop and rename the audio file from .m4a to .m4r. The icon on your desktop should immediately change from an audio icon to a ringtone icon.


If it is not clear from the above screenshot, the command shell dialog went something like this (my commands follow the > prompt):
Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.

C:\Documents and Settings\bob>cd Desktop

C:\Documents and Settings\bob\Desktop>dir
 Volume in drive C has no label.
 Volume Serial Number is 2CBE-1F9C

 Directory of C:\Documents and Settings\bob\Desktop

09/06/2009  02:07 PM    <DIR>          .
09/06/2009  02:07 PM    <DIR>          ..
09/06/2009  02:02 PM         1,072,142 01 Brainstorm.m4a
               1 File(s)      1,072,142 bytes
               2 Dir(s)  10,997,751,040 bytes free

C:\Documents and Settings\bob\Desktop>ren "01 Brainstorm.m4a" "Brainstorm.m4r"

C:\Documents and Settings\bob\Desktop>exit
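If you are on a Mac or Linux box, the same rename is a single mv command. A minimal sketch, hedged: it uses a scratch directory and a stand-in file rather than your real Desktop, and assumes the file name from the example above:

```shell
# Demonstrate the .m4a -> .m4r rename in a scratch directory
# (on a real machine the file lives on your Desktop).
mkdir -p /tmp/ringtone-demo
cd /tmp/ringtone-demo
touch "01 Brainstorm.m4a"                # stand-in for the exported song
mv "01 Brainstorm.m4a" "Brainstorm.m4r"  # only the extension changes
ls Brainstorm.m4r
```

The rename does not touch the audio data; the .m4r extension is simply what tells iTunes to file it under Ringtones.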

6. Copy your new ringtone into iTunes



Drag your newly renamed ringtone into iTunes. You should now see it in the Ringtones part of your library.

If you have set up your iTunes to copy all files into your library (which should be the default), you can delete the icon from your Desktop.

This procedure should work with any non-DRM audio file in your iTunes music library. Now everyone in your contact list can have their own custom ringtone.

Tuesday Jun 30, 2009

VirtualBox 3.0 is now available

VirtualBox 3.0 has been released and is now available for download. You can get binaries for Windows, OS X (Intel Mac), Linux and Solaris hosts at http://www.virtualbox.org/wiki/Downloads

Version 3.0 is a major update and contains the following new features:
  • SMP guest machines, up to 32 virtual CPUs: requires Intel VT-x or AMD-V
  • Experimental support for Direct3D 8/9 in Windows guests: great for games and multimedia applications
  • Support for OpenGL 2.0 for Windows, Linux and Solaris guests

There is also a long list of bugs fixed in the new release. Please see the Changelog for a complete list.


In my early testing of VirtualBox 3.0 I have been most impressed by the SMP performance as well as the Direct3D passthrough from the guest. There are now some games that I can play in a guest virtual machine that I could not previously. Thanks to the VirtualBox development team for another great release.


Wednesday Jun 17, 2009

VirtualBox 3.0 Beta Release 1 is now available for testing

The first beta release of VirtualBox 3.0 is now available. You can download the binaries at http://download.virtualbox.org/virtualbox/3.0.0_BETA1/

Version 3.0 will be a major update. The following major new features were added:
  • Guest SMP with up to 32 virtual CPUs (VT-x and AMD-V only)
  • Windows guests: ability to use Direct3D 8/9 applications / games (experimental)
  • Support for OpenGL 2.0 for Windows, Linux and Solaris guests

In addition, the following items were fixed and/or added:
  • Virtual mouse device: eliminated micro-movements of the virtual mouse which were confusing some applications (bug #3782)
  • Solaris hosts: allow suspend/resume on the host when a VM is running (bug #3826)
  • Solaris hosts: tighten the restriction for contiguous physical memory under certain conditions
  • VMM: fixed occasional guru meditation when loading a saved state (VT-x only)
  • VMM: eliminated IO-APIC overhead with 32 bits guests (VT-x only, some Intel CPUs don’t support this feature (most do); bug #638)
  • VMM: fixed 64 bits CentOS guest hangs during early boot (AMD-V only; bug #3927)
  • VMM: performance improvements for certain PAE guests (e.g. Linux 2.6.29+ kernels)
  • GUI: added mini toolbar for fullscreen and seamless mode (Thanks to Huihong Luo)
  • GUI: redesigned settings dialogs
  • GUI: allow creating/removing host-only network adapters
  • GUI: display estimated time for long running operations (e.g. OVF import/export)
  • GUI: fixed rare hangs when opening the OVF import/export wizards (bug #4157)
  • VRDP: support Windows 7 RDP client
  • Networking: fixed another problem with TX checksum offloading with Linux kernels up to version 2.6.18
  • VHD: properly write empty sectors when cloning of VHD images (bug #4080)
  • VHD: fixed crash when discarding snapshots of a VHD image
  • VBoxManage: fixed incorrect partition table processing when creating VMDK files giving raw partition access (bug #3510)
  • OVF: several OVF 1.0 compatibility fixes
  • Shared Folders: sometimes a file was created using the wrong permissions (2.2.0 regression; bug #3785)
  • Shared Folders: allow changing file attributes from Linux guests and use the correct file mode when creating files
  • Shared Folders: fixed incorrect file timestamps, when using Windows guest on a Linux host (bug #3404)
  • Linux guests: new daemon vboxadd-service to handle time synchronization and guest property lookup
  • Linux guests: implemented guest properties (OS info, logged in users, basic network information)
  • Windows host installer: VirtualBox Python API can now be installed automatically (requires Python and Win32 Extensions installed)
  • USB: Support for high-speed isochronous endpoints has been added. In addition, read-ahead buffering is performed for input endpoints (currently Linux hosts only). This should allow additional devices to work, notably webcams
  • NAT: allow configuring socket and internal parameters
  • Registration dialog uses Sun Online accounts now

Please do not use this VirtualBox Beta release on production machines. A VirtualBox Beta release should be considered a bleeding-edge release meant for early evaluation and testing purposes.

Please use our 'VirtualBox Beta Feedback' forum at http://forums.virtualbox.org/viewforum.php?f=15 to report any problems with the Beta.


Thursday May 21, 2009

Getting Rid of Pesky Live Upgrade Boot Environments

As we discussed earlier, Live Upgrade can solve most of the problems associated with patching and upgrading your Solaris system. I'm not quite ready to post the next installment in the LU series, but from some of the comments and email I have received, there are two problems that I would like to help you work around.

Oh where oh where did that file system go?

One thing you can do to stop Live Upgrade in its tracks is to remove a file system that it thinks another boot environment needs. This does fall into the category of user error, but you are more likely to run into this in a ZFS world where file systems can be created and destroyed with great ease. You will also run into a variant of this if you change your zone configurations without recreating your boot environment, but I'll save that for a later day.

Here is our simple test case:
  1. Create a ZFS file system.
  2. Create a new boot environment.
  3. Delete the ZFS file system.
  4. Watch Live Upgrade fail.

# zfs create arrakis/temp

# lucreate -n test
Checking GRUB menu...
System has findroot enabled GRUB
Analyzing system configuration.
Comparing source boot environment <s10u7-baseline> file systems with the
file system(s) you specified for the new boot environment. Determining
which file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <test>.
Source boot environment is <s10u7-baseline>.
Creating boot environment <test>.
Cloning file systems from boot environment <s10u7-baseline> to create boot environment <test>.
Creating snapshot for <rpool/ROOT/s10u7-baseline> on <rpool/ROOT/s10u7-baseline@test>.
Creating clone for <rpool/ROOT/s10u7-baseline@test> on <rpool/ROOT/test>.
Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/test>.
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <s10u6_baseline> as <mount-point>//boot/grub/menu.lst.prev.
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <test> as <mount-point>//boot/grub/menu.lst.prev.
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <nv114> as <mount-point>//boot/grub/menu.lst.prev.
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <route66> as <mount-point>//boot/grub/menu.lst.prev.
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <nv95> as <mount-point>//boot/grub/menu.lst.prev.
File </boot/grub/menu.lst> propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE <test> in GRUB menu
Population of boot environment <test> successful.
Creation of boot environment <test> successful.

# zfs destroy arrakis/temp

# luupgrade -t -s /export/patches/10_x86_Recommended-2009-05-14  -O "-d" -n test
System has findroot enabled GRUB
No entry for BE <test> in GRUB menu
Validating the contents of the media </export/patches/10_x86_Recommended-2009-05-14>.
The media contains 143 software patches that can be added.
All 143 patches will be added because you did not specify any specific patches to add.
Mounting the BE <test>.
ERROR: Read-only file system: cannot create mount point </.alt.tmp.b-59c.mnt/arrakis/temp>
ERROR: failed to create mount point </.alt.tmp.b-59c.mnt/arrakis/temp> for file system </arrakis/temp>
ERROR: unmounting partially mounted boot environment file systems
ERROR: cannot mount boot environment by icf file </etc/lu/ICF.5>
ERROR: Unable to mount ABE <test>: cannot complete lumk_iconf
Adding patches to the BE <test>.
Validating patches...

Loading patches installed on the system...

Cannot check name /a/var/sadm/pkg.
Unmounting the BE <test>.
The patch add to the BE <test> failed (with result code <1>).
The proper Live Upgrade solution to this problem would be to destroy and recreate the boot environment, or just recreate the missing file system (I'm sure most of you have figured the latter part out on your own). The rationale is that the alternate boot environment no longer matches the storage configuration of its source. This was fine in a UFS world, but it is a bit constraining when ZFS rules the landscape. What if you really wanted the file system to be gone forever?

With a little more understanding of the internals of Live Upgrade, we can fix this rather easily.

Important note: We are about to modify undocumented Live Upgrade configuration files. The formats, names, and contents are subject to change without notice and any errors made while doing this can render your Live Upgrade configuration unusable.

The file system configurations for each boot environment are kept in a set of Internal Configuration Files (ICF) in /etc/lu named ICF.n, where n is the boot environment number. From the error message above we see that /etc/lu/ICF.5 is the one that is causing the problem. Let's take a look.
# cat /etc/lu/ICF.5
test:-:/dev/dsk/c5d0s1:swap:4225095
test:-:/dev/zvol/dsk/rpool/swap:swap:8435712
test:/:rpool/ROOT/test:zfs:0
test:/archives:/dev/dsk/c1t0d0s2:ufs:327645675
test:/arrakis:arrakis:zfs:0
test:/arrakis/misc:arrakis/misc:zfs:0
test:/arrakis/misc2:arrakis/misc2:zfs:0
test:/arrakis/stuff:arrakis/stuff:zfs:0

test:/arrakis/temp:arrakis/temp:zfs:0

test:/audio:arrakis/audio:zfs:0
test:/backups:arrakis/backups:zfs:0
test:/export:arrakis/export:zfs:0
test:/export/home:arrakis/home:zfs:0
test:/export/iso:arrakis/iso:zfs:0
test:/export/linux:arrakis/linux:zfs:0
test:/rpool:rpool:zfs:0
test:/rpool/ROOT:rpool/ROOT:zfs:0
test:/usr/local:arrakis/local:zfs:0
test:/vbox:arrakis/vbox:zfs:0
test:/vbox/fedora8:arrakis/vbox/fedora8:zfs:0
test:/video:arrakis/video:zfs:0
test:/workshop:arrakis/workshop:zfs:0
test:/xp:/dev/dsk/c2d0s7:ufs:70396830
test:/xvm:arrakis/xvm:zfs:0
test:/xvm/fedora8:arrakis/xvm/fedora8:zfs:0
test:/xvm/newfs:arrakis/xvm/newfs:zfs:0
test:/xvm/nv113:arrakis/xvm/nv113:zfs:0
test:/xvm/opensolaris:arrakis/xvm/opensolaris:zfs:0
test:/xvm/s10u5:arrakis/xvm/s10u5:zfs:0
test:/xvm/ub710:arrakis/xvm/ub710:zfs:0
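Each ICF line appears to follow the pattern name:mountpoint:device:fstype:size (this is an undocumented internal format, so treat the layout as an educated guess). A small awk sketch against a scratch copy - never the real /etc/lu files - shows how to pull out just the ZFS entries:

```shell
# Parse a few ICF-style lines from a scratch copy.
# Assumed field layout: BE-name:mountpoint:device:fstype:size
cat > /tmp/icf.demo <<'EOF'
test:/:rpool/ROOT/test:zfs:0
test:/arrakis/temp:arrakis/temp:zfs:0
test:/xp:/dev/dsk/c2d0s7:ufs:70396830
EOF
awk -F: '$4 == "zfs" { printf "%s mounted at %s\n", $3, $2 }' /tmp/icf.demo
```

Since each line is self-contained, dropping the line for a dead file system is safe - which is exactly what the grep -v edit later in this article relies on.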
The first step is to clean up the mess left by the failed luupgrade attempt. At the very least we will need to unmount the alternate boot environment root. It is also very likely that we will have to unmount a few temporary directories, such as /tmp and /var/run. Since this is ZFS, we will also have to remove the directories created when these file systems were mounted.
# df -k | tail -3
rpool/ROOT/test      49545216 6879597 7546183    48%    /.alt.tmp.b-Fx.mnt
swap                 4695136       0 4695136     0%    /a/var/run
swap                 4695136       0 4695136     0%    /a/tmp

# luumount test
# umount /a/var/run
# umount /a/tmp
# rmdir /a/var/run /a/var /a/tmp

Next we need to remove the missing file system entry from the current copy of the ICF file. Use whatever method you prefer (vi, perl, grep). Once we have corrected our local copy of the ICF file we must propagate it to the alternate boot environment we are about to patch. You can skip the propagation if you are going to delete the boot environment without doing any other maintenance activities. The normal Live Upgrade operations will take care of propagating the ICF files to the other boot environments, so we should not have to worry about them at this time.
# mv /etc/lu/ICF.5 /tmp/ICF.5
# grep -v arrakis/temp /tmp/ICF.5 > /etc/lu/ICF.5 
# cp /etc/lu/ICF.5 `lumount test`/etc/lu/ICF.5
# luumount test
At this point we should be good to go. Let's try the luupgrade again.
# luupgrade -t -n test -O "-d" -s /export/patches/10_x86_Recommended-2009-05-14
System has findroot enabled GRUB
No entry for BE <test> in GRUB menu
Validating the contents of the media </export/patches/10_x86_Recommended-2009-05-14>.
The media contains 143 software patches that can be added.
All 143 patches will be added because you did not specify any specific patches to add.
Mounting the BE <test>.
Adding patches to the BE <test>.
Validating patches...

Loading patches installed on the system...

Done!

Loading patches requested to install.

Approved patches will be installed in this order:

118668-19 118669-19 119214-19 123591-10 123896-10 125556-03 139100-02


Checking installed patches...
Verifying sufficient filesystem capacity (dry run method)...
Installing patch packages...

Patch 118668-19 has been successfully installed.
Patch 118669-19 has been successfully installed.
Patch 119214-19 has been successfully installed.
Patch 123591-10 has been successfully installed.
Patch 123896-10 has been successfully installed.
Patch 125556-03 has been successfully installed.
Patch 139100-02 has been successfully installed.

Unmounting the BE <test>.
The patch add to the BE <test> completed.
Now that the alternate boot environment has been patched, we can activate it at our convenience.

I keep deleting and deleting and still can't get rid of those pesky boot environments

This is an interesting corner case where the Live Upgrade configuration files get so scrambled that even simple tasks like deleting a boot environment are not possible. Every time I have gotten myself into this situation I can trace it back to some ill-advised shortcut that seemed harmless at the time, but I won't rule out bugs and environment as possible causes.

Here is our simple test case: turn our boot environment from the previous example into a zombie - something that is neither alive nor dead but just takes up space and causes a mild annoyance.

Important note: Don't try this on a production system. This is for demonstration purposes only.
# dd if=/dev/random of=/etc/lu/ICF.5 bs=2048 count=2
0+2 records in
0+2 records out

# ludelete -f test
System has findroot enabled GRUB
No entry for BE <test> in GRUB menu
ERROR: The mount point </.alt.tmp.b-fxc.mnt> is not a valid ABE mount point (no /etc directory found).
ERROR: The mount point </.alt.tmp.b-fxc.mnt> provided by the <-m> option is not a valid ABE mount point.
Usage: lurootspec [-l error_log] [-o outfile] [-m mntpt]
ERROR: Cannot determine root specification for BE <test>.
ERROR: boot environment <test> is not mounted
Unable to delete boot environment.
Our first task is to make sure that any partially mounted boot environment is cleaned up. A df should help us here.
# df -k | tail -5
arrakis/xvm/opensolaris 350945280      19 17448377     1%    /xvm/opensolaris
arrakis/xvm/s10u5    350945280      19 17448377     1%    /xvm/s10u5
arrakis/xvm/ub710    350945280      19 17448377     1%    /xvm/ub710
swap                 4549680       0 4549680     0%    /.alt.tmp.b-fxc.mnt/var/run
swap                 4549680       0 4549680     0%    /.alt.tmp.b-fxc.mnt/tmp


# umount /.alt.tmp.b-fxc.mnt/tmp
# umount /.alt.tmp.b-fxc.mnt/var/run
Ordinarily you would use lufslist(1M) to try to determine which file systems are in use by the boot environment you are trying to delete. In this worst-case scenario that is not possible. A bit of forensic investigation and a bit more courage will help us figure this out.

The first place we will look is /etc/lutab. This is the configuration file that lists all boot environments known to Live Upgrade. There is a man page for this in section 4, so it is somewhat of a public interface, but please take note of the warning:
 
        The lutab file must not be edited by hand. Any user
        modification to this file will result in the incorrect
        operation of the Live Upgrade feature.
This is very good advice, and failing to follow it has led to some of my most spectacular Live Upgrade meltdowns. But in this case Live Upgrade is already broken, and it may be possible to undo the damage and restore proper operation. So let's see what we can find out.
# cat /etc/lutab
# DO NOT EDIT THIS FILE BY HAND. This file is not a public interface.
# The format and contents of this file are subject to change.
# Any user modification to this file may result in the incorrect
# operation of Live Upgrade.
3:s10u5_baseline:C:0
3:/:/dev/dsk/c2d0s0:1
3:boot-device:/dev/dsk/c2d0s0:2
1:s10u5_lu:C:0
1:/:/dev/dsk/c5d0s0:1
1:boot-device:/dev/dsk/c5d0s0:2
2:s10u6_ufs:C:0
2:/:/dev/dsk/c4d0s0:1
2:boot-device:/dev/dsk/c4d0s0:2
4:s10u6_baseline:C:0
4:/:rpool/ROOT/s10u6_baseline:1
4:boot-device:/dev/dsk/c4d0s3:2
10:route66:C:0
10:/:rpool/ROOT/route66:1
10:boot-device:/dev/dsk/c4d0s3:2
11:nv95:C:0
11:/:rpool/ROOT/nv95:1
11:boot-device:/dev/dsk/c4d0s3:2
6:s10u7-baseline:C:0
6:/:rpool/ROOT/s10u7-baseline:1
6:boot-device:/dev/dsk/c4d0s3:2
7:nv114:C:0
7:/:rpool/ROOT/nv114:1
7:boot-device:/dev/dsk/c4d0s3:2
5:test:C:0
5:/:rpool/ROOT/test:1
5:boot-device:/dev/dsk/c4d0s3:2
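Before touching anything, it helps to see the structure: each boot environment owns three lines keyed by its number, and the line whose last field is 0 carries the BE name. A hedged sketch (the format is undocumented; this layout is inferred from the listing above) that maps numbers to names, run against a scratch copy rather than the real /etc/lutab:

```shell
# Map BE numbers to names from a lutab-style scratch file
# (never parse or edit the real /etc/lutab casually).
cat > /tmp/lutab.demo <<'EOF'
5:test:C:0
5:/:rpool/ROOT/test:1
5:boot-device:/dev/dsk/c4d0s3:2
7:nv114:C:0
7:/:rpool/ROOT/nv114:1
7:boot-device:/dev/dsk/c4d0s3:2
EOF
awk -F: '$4 == "0" { printf "BE %s -> %s\n", $1, $2 }' /tmp/lutab.demo
```

Deleting a BE by hand means removing all three of its lines, which is what the grep -v step that follows does by matching on the leading number.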
We can see that the boot environment named test is (still) BE #5 and has its root file system at rpool/ROOT/test. This is the default dataset name and indicates that the boot environment has not been renamed. Consider the following example for a more complicated configuration.
# lucreate -n scooby
# lufslist scooby | grep ROOT
rpool/ROOT/scooby       zfs            241152 /                   -
rpool/ROOT              zfs       39284664832 /rpool/ROOT         -

# lurename -e scooby -n doo
# lufslist doo | grep ROOT
rpool/ROOT/scooby       zfs            241152 /                   -
rpool/ROOT              zfs       39284664832 /rpool/ROOT         -
The point is that we have to trust the contents of /etc/lutab, but it does not hurt to do a bit of sanity checking before we start deleting ZFS datasets. To remove boot environment test from the view of Live Upgrade, delete the three lines in /etc/lutab starting with 5 (in this example). We should also remove its Internal Configuration File (ICF), /etc/lu/ICF.5.
# mv -f /etc/lutab /etc/lutab.old
# grep -v \^5: /etc/lutab.old > /etc/lutab
# rm -f /etc/lu/ICF.5

# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
s10u5_baseline             yes      no     no        yes    -         
s10u5_lu                   yes      no     no        yes    -         
s10u6_ufs                  yes      no     no        yes    -         
s10u6_baseline             yes      no     no        yes    -         
route66                    yes      no     no        yes    -         
nv95                       yes      yes    yes       no     -         
s10u7-baseline             yes      no     no        yes    -         
nv114                      yes      no     no        yes    -         
If the boot environment being deleted is in UFS then we are done. Well, not exactly - but pretty close. We still need to propagate the updated configuration files to the remaining boot environments. This will be done during the next live upgrade operation (lucreate, lumake, ludelete, luactivate) and I would recommend that you let Live Upgrade handle this part. The exception to this will be if you boot directly into another boot environment without activating it first. This isn't a recommended practice and has been the source of some of my most frustrating mistakes.

If the exorcised boot environment is in ZFS then we still have a little bit of work to do. We need to delete the old root datasets and any snapshots that they may have been cloned from. In our example the root dataset was rpool/ROOT/test. We need to look for any children as well as the originating snapshot, if present.
# zfs list -r rpool/ROOT/test
NAME                  USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/test       234K  6.47G  8.79G  /.alt.test
rpool/ROOT/test/var    18K  6.47G    18K  /.alt.test/var

# zfs get -r origin rpool/ROOT/test
NAME                 PROPERTY  VALUE                     SOURCE
rpool/ROOT/test      origin    rpool/ROOT/nv95@test      -
rpool/ROOT/test/var  origin    rpool/ROOT/nv95/var@test  -
# zfs destroy rpool/ROOT/test/var
# zfs destroy rpool/ROOT/nv95/var@test
# zfs destroy rpool/ROOT/test
# zfs destroy rpool/ROOT/nv95@test
Important note: luactivate will promote the newly activated root dataset so that snapshots used to create alternate boot environments should be easy to delete. If you are switching between boot environments without activating them first (which I have already warned you about), you may have to manually promote a different dataset so that the snapshots can be deleted.

To BE or not to BE - how about no BE?

You may find yourself in a situation where you have things so scrambled up that you want to start all over again. We can use what we have just learned to unwind Live Upgrade and start from a clean configuration. Specifically, we want to delete /etc/lutab, the ICF and related files, all of the temporary files in /etc/lu/tmp, and a few files that hold environment variables for some of the lu scripts. And if using ZFS, we will also have to delete any datasets and snapshots that are no longer needed.
 
# rm -f /etc/lutab 
# rm -f /etc/lu/ICF.* /etc/lu/INODE.* /etc/lu/vtoc.*
# rm -f /etc/lu/.??*
# rm -f /etc/lu/tmp/* 

# lustatus
ERROR: No boot environments are configured on this system
ERROR: cannot determine list of all boot environment names

# lucreate -c scooby -n doo
Checking GRUB menu...
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <scooby>.
Creating initial configuration for primary boot environment <scooby>.
The device </dev/dsk/c4d0s3> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <scooby> PBE Boot Device </dev/dsk/c4d0s3>.
Comparing source boot environment <scooby> file systems with the file 
system(s) you specified for the new boot environment. Determining which 
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <doo>.
Source boot environment is <scooby>.
Creating boot environment <doo>.
Cloning file systems from boot environment <scooby> to create boot environment <doo>.
Creating snapshot for <rpool/ROOT/scooby> on <rpool/ROOT/scooby@doo>.
Creating clone for <rpool/ROOT/scooby@doo> on <rpool/ROOT/doo>.
Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/doo>.
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <doo> as <mount-point>//boot/grub/menu.lst.prev.
File </boot/grub/menu.lst> propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE <doo> in GRUB menu
Population of boot environment <doo> successful.
Creation of boot environment <doo> successful.

# luactivate doo
System has findroot enabled GRUB
Generating boot-sign, partition and slice information for PBE 

File  deletion successful
File  deletion successful
File  deletion successful
Activation of boot environment <doo> successful.

# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
scooby                     yes      yes    no        no     -         
doo                        yes      no     yes       no     -        
Pretty cool, eh?

There are still a few more interesting corner cases, but we will deal with those in one of the next articles. In the meantime, please remember to
  • Check Infodoc 206844 for Live Upgrade patch requirements
  • Keep your patching and package utilities updated
  • Use luactivate to switch between boot environments



Tuesday May 19, 2009

This is why you do after run maintenance

Do you wonder why one racer's truck is always faster and handles better than yours?

It is often the little things, like regular maintenance. Off road trucks take a tremendous amount of punishment, especially if you are prone to running into curbs or cartwheeling off a ramp. At the end of every session you should give your truck a thorough examination and replace parts like this bent hinge pin.

Photos: 03 - Bent hinge pin; 04 - Straight hinge pin


It may not seem like much, but this bent hinge pin will cause the steering block to bind in the caster block, which will impact the handling of the truck. This will be more pronounced coming out of a turn, but it may even prevent your truck from keeping a straight line. If this is a four wheel drive truck you may experience a lack of power when turning. You will also be creating additional work for your steering servo, which can lead to shorter run times and premature failure of the servo itself. And this is completely unnecessary: a replacement hinge pin is a $2 part and takes just a few seconds to install.

Before you invest in an expensive brushless power plant and high capacity energy source, do the little things.

Get your truck ready for the stresses of a more powerful motor.
  • Reduce rolling friction everywhere (large bearings on the axle carriers and steering blocks)
  • Stronger lower suspension arms
  • Aluminum caster blocks to prevent bending your hinge pins in the future
  • Steel transmission and differential gears
  • Heavy duty output drives
  • Large front and rear bumpers to protect suspension parts and your new motor
  • Heavy steel steering links, and if available adjustable steel camber turnbuckles
  • Heavy duty aluminum shock towers (as the stock ones fail)
  • Get a range of pinion and spur gears (and start at the low end of the recommended gearing range and move up if the motor, esc and battery stay cool)
Once you have done these upgrades, then get yourself a Mamba Max or Novak Havoc system. Once you have run a brushless system with LiPo batteries you will never go back to brushed motors. I'm even converting my 1/18 scale trucks to brushless, as time and repairs permit.

And at the end of every race or afternoon of bashing
  • Thoroughly inspect your truck for damage and replace all broken parts
  • Clean dirt away from electronics (radio, speed control, servos)
  • Make sure your suspension moves freely
  • Replace any bent screws or hinge pins
  • Tighten screws that have become loose - be careful with plastic parts as they can easily strip
  • Oil your motor bearings and, if running a brushed motor, clean it with motor spray and check the brushes and commutator
This will keep your truck in top running form and fun for years to come.


Wednesday Apr 08, 2009

VirtualBox 2.2 has been released

VirtualBox 2.2 is now available. This is a major upgrade and includes the following new features:

  • OVF (Open Virtualization Format) appliance import and export (see chapter 3.8, Importing and exporting virtual machines, User Manual page 55)
  • Host-only networking mode (see chapter 6.7, Host-only networking, User Manual page 88)
  • Hypervisor optimizations with significant performance gains for high context switching rates
  • Raised the memory limit for VMs on 64-bit hosts to 16GB
  • VT-x/AMD-V are enabled by default for newly created virtual machines
  • USB (OHCI & EHCI) is enabled by default for newly created virtual machines (Qt GUI only)
  • Experimental USB support for OpenSolaris hosts
  • Shared folders for Solaris and OpenSolaris guests
  • OpenGL 3D acceleration for Linux and Solaris guests (see chapter 4.8, Hardware 3D acceleration (OpenGL), User Manual page 70)
  • Added C API in addition to C++, Java, Python and Web Services

    For a complete list of new features and bug fixes, see the Changelog at http://www.virtualbox.org/wiki/Changelog.

    VirtualBox 2.2 can be downloaded from http://www.virtualbox.org/wiki/Downloads

    Important: Remember to reinstall the guest additions for all of your existing VMs.
    Tuesday Apr 07, 2009

    Becca's Mini-T gets a new pair of shoes

    While repairing the trucks from the weekend bashing, I replaced the tires on my daughter's Mini-T. We put on a set of 4 Pro-Line Dirt Hawgs (old style) and it now looks as tough as the RC18MT - except for the pink and purple paint scheme. And with the new dual slipper clutch and big block engine, it can keep up with the Associated mini-monster truck on just about any terrain. The heavier tires slow it down just a bit, but the increased traction makes driving it much more fun. Until you forget to back off on the steering servo rate and flip it in a turn.

    This was my first painted Lexan body and I think it turned out pretty good. My daughter chose the color scheme. The flames on the side were hand done by masking. I have some new bodies to paint as soon as the rains go away and the humidity drops a bit. The black Hummer body for the T-Maxx to match the real one that I drive should be awesome.

    What do you want for Easter ?

    The other day my daughter asked me what I wanted for Easter. Other than being ecstatic that she started the conversation in the role of the giver, I had to admit that I had not even thought about it. Being the helping teenager, she suggested an iTunes gift card. This of course is her way of saying that she wanted an iTunes gift card, which I have mentally filed away for the time when my path crosses said item.

    Putting together a mythical budget of $200US (no, that's not the Easter gift budget - just a conversation starter), what would be the best addition to the garage ?

    Here is my short list, feel free to add anything I might be missing.

  • Brushless motor and ESC for the Rustler. This is an upgrade for 2 trucks as the Rustler VXL-5/Titan 12T combo will be reused in the Stampede to replace the stock setup.
  • Traxxas Slash 1/10 short course race truck (waterproof).
  • Associated RC18 Factory Team aluminum billet set (serious bling for my daughter's truck) plus a 17T mod motor. Oooof - too much power ?
  • Equivalent aluminum parts for her Mini-T
  • New LiPo capable charger
  • Nothing now, put it in the bank towards a Traxxas Summit. Father's Day is coming soon !

    The new Kyosho DRT short course nitro truck is just a bit out of the budget, but looks like quite a bargain. I haven't read a review of the engine performance, but I have yet to see a bad product come out of Kyosho. The oversized fuel tank would be a big benefit for weekend bashing.

    Which of these would be your choice ? Or do you have a better idea ?
    Sunday Apr 05, 2009

    A Good Weekend of Bashing

    With the temperatures climbing into the mid 80s and plenty of sunshine, it was a great weekend to get out some of the RC trucks and see what they could do. The final tally for the weekend was

  • A broken rear shock on the Rustler - and oil everywhere!
  • A broken front shock spring retaining clip on the RC18MT
  • A broken servo brace on the Mini-T
  • Finished all but the cosmetics for the Hornet (see left)
  • And not a (new) scratch on the Stampede. It just keeps going, and going.....

    A couple of dollars in parts and maybe an hour of wrenching on the trucks and they will all be as good as new. Except for the Rustler who might be getting a brushless motor upgrade. I'm trying to decide between a Mamba Max ESC with either the 4600kV (conservative) or 5700kV (wild) or the Traxxas VXL-3 Velineon (simple). Of course this will set in motion other things like a wheelie bar, new diff, LiPo batteries and charger, maybe some aluminum hub carriers.

    And of course when all of this is done, the Stampede gets upgraded with the old parts from the Rustler. We'll see how it holds up with a new ESC and the 12T motor. I'm seeing a lot of transmission work here too. But it will all be fun.

    Friday Mar 27, 2009

    VirtualBox 2.2 Beta 2 now available for testing


    VirtualBox 2.2 Beta 2 is now available for testing. In addition to the feature list from the Beta 1 announcement, Beta 2 contains the following fixes.

  • Raised the memory limit for VMs on 64-bit hosts to 16GB
  • Many fixes for OVF import/export
  • VMM: properly emulate RDMSR from the TSC MSR, should fix some NetBSD guests
  • IDE: fixed hard disk upgrade from XML-1.2 settings (bug #1518)
  • Hard disks: refuse to start the VM if a disk image is not writable
  • USB: Fixed BSOD on the host with certain USB devices (Windows hosts only; bug #1654)
  • E1000: properly handle cable disconnects (bug #3421)
  • X11 guests: prevented setting the locale in vboxmouse, as this caused problems with Turkish locales (bug #3563)
  • Linux additions: fixed typo when detecting Xorg 1.6 (bug #3555)
  • Windows guests: bind the VBoxMouse.sys filter driver to the correct guest pointing device (bug #1324)
  • Windows hosts: fixed BSOD when starting a VM with enabled host interface (bug #3414)
  • Linux hosts: do not leave zombies of VBoxSysInfo.sh (bug #3586)
  • VBoxManage: controlvm dvdattach did not work if the image was attached before
  • VBoxManage showvminfo: don't spam the release log if the additions don't support statistics information (bug #3457)
  • GUI: fail with an appropriate error message when trying to boot a read-only disk image (bug #1745)
  • LsiLogic: fixed problems with Solaris guests

    See the announcement in the VirtualBox Forum for more information. The beta can be downloaded from http://download.virtualbox.org/virtualbox/2.2.0_BETA2.

    As with Beta 1, this release is for testing and evaluation purposes, so please do not run this on a production system. I really don't need to remind you of this - you will get a nice graphical reminder every time you start the control program.

    Point your browser at the VirtualBox Beta Feedback Forum for more information about the beta program.

    Tuesday Mar 24, 2009

    Nice OpenSolaris 2008.11 training materials from CZOSUG event

    I don't normally just post about something that someone else did. That is what RSS aggregators and search engines are for. Occasionally something comes across an email discussion list that you just have to pass along. This is one of those times.

    Roman Strobl, Martin Man and Lubos Kocman (leaders of the Czech OpenSolaris User Group) put together a very nice OpenSolaris training day and have posted all of the materials from the event. This is an excellent OpenSolaris overview and tutorial - nicely paced and a good amount of content.

    Thanks to Roman, Lubos and Martin for making this available.

    Adobe releases an x86 version of Acroread 9.1 for Solaris

    Great Googly Moogly!!! Our friends at Adobe have finally released a new x86 version of Acroread for Solaris. Download Acroread 9.1 from Adobe.com and say goodbye to evince, xpdf, and the especially interesting Acroread out of the Linux branded zone trick.

    Monday Mar 23, 2009

    VirtualBox 2.2 Beta 1 now available for testing


    VirtualBox 2.2 Beta 1 is now available for testing. It is a major release and contains many new features, including

  • OVF (Open Virtualization Format) appliance import and export
  • Host-only networking mode
  • Hypervisor optimizations with significant performance gains for high context switching rates
  • VT-x/AMD-V are enabled by default for newly created virtual machines
  • USB (OHCI & EHCI) is enabled by default for newly created virtual machines (Qt GUI only)
  • Experimental USB support for OpenSolaris hosts
  • Shared folders for Solaris and OpenSolaris guests
  • OpenGL 3D acceleration for Linux guests
  • Experimental support for OS X 10.6 (Snow Leopard) hosts running both the 64-bit and the 32-bit kernel

    as well as numerous bug fixes. See the announcement in the VirtualBox Forum for more information. The beta can be downloaded from http://download.virtualbox.org/virtualbox/2.2.0_BETA1.

    This release is for testing and evaluation purposes, so please do not run this on a production system. I really don't need to remind you of this - you will get a nice graphical reminder every time you start the control program.

    Since there are some configuration file changes in the new release, the beta program will also upgrade your configurations. Please back up your XML files before running it for the first time (or, if you are like me, a zfs snapshot -r of your xvm datasets will do the trick).
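
    That snapshot safety net is a one-liner. Assuming, purely for illustration, that the VirtualBox XML files live under a dataset named rpool/export/home/me:

    $ zfs snapshot -r rpool/export/home/me@pre-vbox22

    If the configuration upgrade goes sideways, a zfs rollback of that snapshot puts the old XML files back.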

    Point your browser at the VirtualBox Beta Feedback Forum for more information about the beta program.

    Sunday Mar 22, 2009

    Dr. Live Upgrade - Or How I Learned to Stop Worrying and Love Solaris Patching

    Who loves to patch or upgrade a system ?

    That's right, nobody. Or if you do, perhaps we should start a local support group to help you come to terms with this unusual fascination. Patching, and to a lesser extent upgrades (which can be thought of as patches delivered more efficiently through package replacement), is the most common complaint that I hear when meeting with system administrators and their management.

    Most of the difficulties seem to fit into one of the following categories.
    • Analysis: What patches need to be applied to my system ?
    • Effort: What do I have to do to perform the required maintenance ?
    • Outage: How long will the system be down to perform the maintenance ?
    • Recovery: What happens when something goes wrong ?
    And if a single system gives you a headache, adding a few containers into the mix will bring on a full migraine. And without some relief you may be left with the impression that containers aren't worth the effort. That's unfortunate because containers don't have to be troublesome and patching doesn't have to be hard. But it does take getting to know one of the most important and sadly least used features in Solaris: Live Upgrade.

    Before we look at Live Upgrade, let's start with a definition. A boot environment is the set of all file systems and devices that are unique to an instance of Solaris on a system. If you have several boot environments then some data will be shared (non svr4 package installed applications, data, local home directories) and some will be exclusive to one boot environment. Not making this more complicated than it needs to be, a boot environment is generally your root (including /usr and /etc), /var (frequently split out on a separate file system), and /opt. Swap may or may not be a part of a boot environment - it is your choice. I prefer to share swap, but there are some operational situations where this may not be feasible. There may be additional items, but generally everything else is shared. Network mounted file systems and removable media are assumed to be shared.

    With this definition behind us, let's proceed.

    Analysis: What patches need to be applied to my system ?

    For all of the assistance that Live Upgrade offers, it doesn't do anything to help with the analysis phase. Fortunately there are plenty of tools that can help with this phase. Some of them work nicely with Live Upgrade, others take a bit more effort.

    smpatch(1M) has an analyze capability that can determine which patches need to be applied to your system. It will get a list of patches from an update server, most likely one at Sun, and match up the dependencies and requirements with your system. smpatch can be used to download these patches for future application or it can apply them for you. smpatch works nicely with Live Upgrade, so from a single command you can upgrade an alternate boot environment. With containers!
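
    As a sketch of how the two play together, with an illustrative patch id and boot environment name (/var/sadm/spool is where smpatch drops its downloads by default):

    $ smpatch analyze
    $ smpatch download -i 123456-78
    $ luupgrade -t -n s10-patched -s /var/sadm/spool 123456-78

    The analysis and download happen on the running system; only the luupgrade step touches the alternate boot environment.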

    The Sun Update Manager is a simple to use graphical front end for smpatch. It gives you a little more flexibility during the inspection phase by allowing you to look at individual patch README files. It is also much easier to see what collection a patch belongs to (recommended, security, none) and if the application of that patch will require a reboot. For all of that additional flexibility you lose the integration with Live Upgrade. Not for lack of trying, but I have not found a good way to make Update Manager and Live Upgrade play together.

    Sun xVM Ops Center has a much more sophisticated patch analysis system that uses additional knowledge engines beyond those used by smpatch and Update Manager. The result is a higher quality patch bundle tailored for each individual system, automated deployment of the patch bundle, detailed auditing of what was done and simple backout should problems occur. And it basically does the same for Windows and Linux. It is this last feature that makes things interesting. Neither Windows nor Linux have anything like Live Upgrade and the least common denominator approach of Ops Center in its current state means that it doesn't work with Live Upgrade. Fortunately this will change in the not too distant future, and when it does I will be shouting about this feature from rooftops (OK, what I really mean is I'll post a blog and a tweet about it). If I can coax Ops Center into doing the analysis and download pieces then I can manually bolt it onto Live Upgrade for a best of both worlds solution.

    These are our offerings and there are others. Some of them are quite good and in use in many places. Patch Check Advanced (PCA) is one of the more common tools in use. It operates on a patch dependency cross reference file and does a good job with the dependency analysis (this is obsoleted by that, etc). It can be used to maintain an alternate boot environment and in simple cases that would be fine. If the alternate boot environment contains any containers then I would use Live Upgrade's luupgrade instead of PCA's patchadd -R approach. If I was familiar with PCA then I would still use it for the analysis and download feature. Just let luupgrade apply the patches. You might have to uncompress the patches downloaded by PCA before handing them over to luupgrade, but that is a minor implementation detail.

    In summary, use an analysis tool appropriate to the task (based on familiarity, budget and complexity) to figure out what patches are needed. Then use Live Upgrade (luupgrade) to deploy the desired patches.

    Effort: What does it take to perform the required maintenance ?

    This is a big topic and I could write pages on the subject. Even if I use an analysis tool like smpatch or pca to save me hours of trolling through READMEs and drawing dependency graphs, there is still a lot of work to do in order to survive the ordeal of applying patches. Some of the more common techniques include:
    Backing up your boot environment.
    I should not have to mention this, but there are some operational considerations unique to system maintenance. Even though the chance is tiny, you are more likely to render your system non-bootable during system maintenance than during any other operational task. Even with mature processes, human factors can come into play and bad things can happen (oops - that was my fallback boot environment that I just ran newfs(1M) on).

    This is why automation and time tested scripting become so important. Should you do the unthinkable and render a system nonfunctional, rapid restoration of the boot environment is important. And getting it back to the last known good state is just as important. A fresh backup that can be restored by utilities from install media or a jumpstart miniroot is a very good idea. Flash archives (see flarcreate(1M)) are even better, although complications with containers make this less interesting now than in previous releases of Solaris. How many of you take a backup before applying patches ? Probably about the same number as replace batteries in your RAID controllers or change out your UPS systems after their expiration date.
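
    The flash archive itself is one command. The archive name and NFS path here are illustrative:

    $ flarcreate -n pre-patch-`hostname` /net/backups/flar/`hostname`-pre-patch.flar

    That archive can be laid back down from install media or a JumpStart miniroot should the worst happen.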

    Split Mirrors
    One interesting technique is to split mirrors instead of backups. Of course this only works if you mirror your boot environment (a recommended practice for those systems with adequate disk space). Break your mirror, apply patches to the non-running half, cut over the updated boot environment during the next maintenance window and see how this goes. At first glance this seems like a good idea, but there are two catches.
    1. Do you synchronize dynamic boot environment elements ? Things like /etc/passwd, /etc/shadow, /var/adm/messages, print and mail queues are constantly changing. It is possible that these have changed between the mirror split and subsequent activation.
    2. How long are you willing to run without your boot environment being mirrored ? This may cause you to certify the new boot environment too quickly. You want to reestablish your mirror, but if that is your fallback in case of trouble you have a conundrum. And if you are the sort that seems to have a black cloud following you through life, you will discover a problem shortly after you start the mirror resync.
    Pez disks ?
    OK, the mirror split thing can be solved by swinging in another disk. Operationally a bit more complex and you have at least one disk that you can't use for other purposes (like hosting a few containers), but it can be done. I wouldn't do it (mainly because I know where this story is heading) but many of you do.
    Better living through Live Upgrade
    Everything we do to try to make it better adds complexity, or another hundred lines of scripting. It doesn't need to be this way, and if you become one with the LU commands it won't be for you either. Live Upgrade will take care of building and updating multiple boot environments. It will check to make sure the disks being used are bootable and not part of another boot environment. It works with the Solaris Volume Manager, Veritas encapsulated root devices, and starting with Solaris 10 10/08 (update 6) ZFS. It also takes care of the synchronization problem. Starting with Solaris 10 8/07 (update 4), Live Upgrade also works with containers, both native and branded (and with Solaris 10 10/08 your zoneroots can be in a ZFS pool).

    Outage: How long will my system be down for the maintenance?

    Or perhaps more to the point, how long will my applications be unavailable ? The proper reply is that it depends on how big the patch bundle is and how many containers you have. And if a kernel patch is involved, double or triple your estimate. This can be a big problem and cause you to take short cuts, like installing only some patches now and others later when it is more convenient. Our good friend Bart Smaalders has a nice discussion on the implications of this approach and what we are doing in OpenSolaris to solve this. That solution will eventually work its way into the Next Solaris, but in the meantime we have a problem to solve.

    There is a large set (not really large, but more than one) of patches that require a quiescent system to be properly applied. An example would be a kernel patch that causes a change to libc. It is sort of hard to rip out libc on a running system (new processes get the new libc and may have issues with the running kernel; old processes get the old libc and tend to be fine, until they do a fork(2) and exec(2)). So we developed a brilliant solution to this problem - deferred activation patching. If you apply one of these troublesome patches then we will throw it in a queue to be applied the next time the system is quiesced (a fancy term for the next time we're in single user mode). This solves the current system stability concerns but may make the next reboot take a bit longer. And if you forgot you have deferred patches in your queue, don't get anxious and interrupt the shutdown or next boot. Grab a noncaffeinated beverage and put some Bobby McFerrin on your iPod. Don't Worry, Be Happy.

    So deferred activation patching seems like a good way to deal with the situation where everything goes well. And some brilliant engineers are working on applying patches in parallel (where applicable) which will make this even better. But what happens when things go wrong ? This is when you realize that patchrm(1M) is not your friend. It has never been your friend, nor will it ever be. I have an almost paralyzing fear of dentists, but would rather visit one than start down a path where patchrm is involved. Well tested tools and some automation can reduce this to simple anxiety, but if I could eliminate patchrm altogether I would be much happier.

    For all that Live Upgrade can do to ease system maintenance, it is in the areas of outage and recovery that it truly shines. And when speaking about Solaris, either in training or evangelism events, this is why I urge attendees to drop whatever they are doing and adopt Live Upgrade immediately.

    Since Live Upgrade (lucreate, lumake, luupgrade) operates on an alternate boot environment, the currently running set of applications are not affected. The system stays up, applications stay running and nothing is changing underneath them so there is no cause for concern. The only impact is some additional load by the live upgrade operations. If that is a concern then run live upgrade in a project and cap resource consumption to that project.

    An interesting implication of Live Upgrade is that the operational sanity of each step is no longer required. All that matters is the end state. This gives us more freedom to apply patches in a more efficient fashion than would be possible on a running boot environment. This is especially noticeable on a system with containers. The time that the upgrade runs is significantly reduced, and all the while applications are running. No more deferred activation patches, no more single user mode patching. And if all goes poorly after activating the new boot environment you still have your old one to fall back on. Cue Bobby McFerrin for another round of "Don't Worry, Be Happy".

    This brings up another feature of Live Upgrade - the synchronization of system files in flight between boot environments. After a boot environment is activated, a synchronization process is queued as a K0 script to be run during shutdown. Live Upgrade will catch a lot of private files that we know about and the obvious public ones (/etc/passwd, /etc/shadow, /var/adm/messages, mail queues). It also provides a place (/etc/lu/synclist) for you to include things we might not have thought about or are unique to your applications.
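
    Entries in /etc/lu/synclist are simply a pathname and an action, one pair per line. A hypothetical addition for an application's own state might look like:

    /etc/opt/myapp/app.conf     OVERWRITE
    /var/opt/myapp/queue        OVERWRITE

    OVERWRITE replaces the copy in the newly booted environment with the one from the old environment; APPEND and PREREBOOT are the other actions the synclist understands.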

    When using Live Upgrade applications are only unavailable for the amount of time it takes to shut down the system (the synchronization process) and boot the new boot environment. This may include some minor SMF manifest importing but that should not add much to the new boot time. You only have to complete the restart during a maintenance window, not the entire upgrade. While vampires are all the rage for teenagers these days, system administrators can now come out into the light and work regular hours.

    Recovery: What happens when something goes wrong?

    This is when you will fully appreciate Live Upgrade. After activation of a new boot environment, now called the Primary Boot Environment (PBE), your old boot environment, now called an Alternate Boot Environment (ABE), can still be called upon in case of trouble. Just activate it and shut down the system. Applications will be down for a short period (the K0 sync and subsequent start-up), but there will be no more wringing of the hands, reaching for beverages with too much caffeine and vitamin B12, or trying to remember where you kept your bottle of Tums. Cue Bobby McFerrin one more time and "Don't Worry, Be Happy". You will be back to your previous operational state in a matter of a few minutes (longer if you have a large server with many disks). Then you can mount up your ABE and troll through the logs trying to determine what went wrong. If you have a service contract then we will troll through the logs with you.
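
    The fallback itself is only a couple of commands - s10-orig here stands in for whatever lustatus says your previous boot environment is called:

    $ luactivate s10-orig
    $ init 6

    The K0 synchronization runs on the way down, and you boot back into the previous environment.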

    I neglected to mention earlier, disks that comprise boot environments can be mirrored, so there is no rush to certification. Everything can be mirrored, at all times. Which is a very good thing. You still need to back up your boot environments, but you will find yourself reaching for the backup media much less often when using Live Upgrade.

    All that is left are a few simple examples of how to use Live Upgrade. I'll save that for next time.


    Tuesday Mar 17, 2009

    Time-slider saves the day (or at least a lot of frustration)

    As I was tidying up my Live Upgrade boot environments yesterday, I did something that I thought was terribly clever but had some pretty wicked side effects. While linking up all of my application configuration directories (firefox, mozilla, thunderbird, [g]xine, staroffice) I got blindsided by the GNOME message client: pidgin, or more specifically one of our migration assistants from GAIM to pidgin.

    As a quick background, Solaris, Solaris Express Community Edition (SXCE), and OpenSolaris all have different versions of the GNOME desktop. Since some of the configuration settings are incompatible across releases the easy solution is to keep separate home directories for each version of GNOME you might use. Which is fine until you grow weary of setting your message filters for Thunderbird again or forget which Firefox has that cached password for the local recreation center that you only use once a year. Pretty quickly you come up with the idea of a common directory for all shared configuration files (dot directories, collections of pictures, video, audio, presentations, scripts).

    For one boot environment you do something like
    $ mkdir /export/home/me
    $ for dotdir in .thunderbird .purple .mozilla .firefox .gxine .xine .wine .staroffice* .openoffice* .VirtualBox .evolution bin lib misc presentations
    > do
    > mv $dotdir /export/home/me
    > ln -s /export/home/me/$dotdir   $dotdir
    > done
    
    And for the other GNOME home directories you do something like
    $ for dotdir in .thunderbird .purple .mozilla .firefox .gxine .xine .wine .staroffice* .openoffice* .VirtualBox .evolution bin lib misc presentations
    > do
    > mv $dotdir ${dotdir}.old
    > ln -s /export/home/me/$dotdir   $dotdir
    > done
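
    The two loops can be folded into a single helper that handles both cases. This is a sketch rather than anything from the original post - the function name and arguments are hypothetical:

```shell
# Move a dot directory into a shared home and leave a symlink behind.
#   $1 = dot directory name, $2 = shared base directory, $3 = home directory
link_shared() {
    dotdir=$1; shared=$2; home=$3
    mkdir -p "$shared"
    if [ -e "$home/$dotdir" ] && [ ! -e "$shared/$dotdir" ]; then
        # First boot environment: move the master copy to the shared area
        mv "$home/$dotdir" "$shared/$dotdir"
    elif [ -e "$home/$dotdir" ]; then
        # Subsequent boot environments: set the local copy aside
        mv "$home/$dotdir" "$home/$dotdir.old"
    fi
    ln -s "$shared/$dotdir" "$home/$dotdir"
}
```

    Call it once per dot directory from each home directory, e.g. link_shared .purple /export/home/me $HOME.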
    
    And all is well. Until......

    Booted into Solaris 10 and fired up pidgin, thinking I would get all of my accounts activated and the default chatrooms started. Instead I was met by a rather nasty note that I had incompatible GAIM entries and it would try to convert them for me. What it did was wipe out all of my pidgin settings. And sure enough, when I looked into the shared directory, .purple contained all new and quite empty configuration settings.

    This is where I am hoping to get some sympathy, since we have all done things like this. But then I remembered I had started time-slider earlier in the day (from the OpenSolaris side of things).
    $ time-slider-setup
    
    And there were my .purple files from 15 minutes ago, right before the GAIM conversion tools made a mess of them.
    $ cd /export/home/.zfs/snapshot
    $ ls
    zfs-auto-snap:daily-2009-03-16-22:47
    zfs-auto-snap:daily-2009-03-17-00:00
    zfs-auto-snap:frequent-2009-03-17-11:45
    zfs-auto-snap:frequent-2009-03-17-12:00
    zfs-auto-snap:frequent-2009-03-17-12:15
    zfs-auto-snap:frequent-2009-03-17-12:30
    zfs-auto-snap:hourly-2009-03-16-22:47
    zfs-auto-snap:hourly-2009-03-16-23:00
    zfs-auto-snap:hourly-2009-03-17-00:00
    zfs-auto-snap:hourly-2009-03-17-01:00
    zfs-auto-snap:hourly-2009-03-17-02:00
    zfs-auto-snap:hourly-2009-03-17-03:00
    zfs-auto-snap:hourly-2009-03-17-04:00
    zfs-auto-snap:hourly-2009-03-17-05:00
    zfs-auto-snap:hourly-2009-03-17-06:00
    zfs-auto-snap:hourly-2009-03-17-07:00
    zfs-auto-snap:hourly-2009-03-17-08:00
    zfs-auto-snap:hourly-2009-03-17-09:00
    zfs-auto-snap:hourly-2009-03-17-10:00
    zfs-auto-snap:hourly-2009-03-17-11:00
    zfs-auto-snap:hourly-2009-03-17-12:00
    zfs-auto-snap:monthly-2009-03-16-11:38
    zfs-auto-snap:weekly-2009-03-16-22:47
    
    $ cd zfs-auto-snap:frequent-2009-03-17-12:15/me/.purple
    $ rm -rf /export/home/me/.purple/*
    $ cp -r * /export/home/me/.purple
    
    (and this is really, really important)
    $ mv $HOME/.gaim $HOME/.gaim-never-to-be-heard-from-again
    
    
    Log out and back in to refresh the GNOME configuration settings and everything is as it should be. OpenSolaris time-slider is just one more reason I'm glad it is my daily driver.

    About

    Bob Netherton is a Principal Sales Consultant for the North American Commercial Hardware group, specializing in Solaris, Virtualization and Engineered Systems. Bob is also a contributing author of Solaris 10 Virtualization Essentials.

    This blog will contain information about all three, but primarily focused on topics for Solaris system administrators.

    Please follow me on Twitter or Facebook, or send me email
