Tuesday Mar 17, 2009

Time-slider saves the day (or at least a lot of frustration)

As I was tidying up my Live Upgrade boot environments yesterday, I did something that I thought was terribly clever but had some pretty wicked side effects. While linking up all of my application configuration directories (firefox, mozilla, thunderbird, [g]xine, staroffice) I got blindsided by the GNOME message client: pidgin, or more specifically one of our migration assistants from GAIM to pidgin.

As a quick background, Solaris, Solaris Express Community Edition (SXCE), and OpenSolaris all have different versions of the GNOME desktop. Since some of the configuration settings are incompatible across releases the easy solution is to keep separate home directories for each version of GNOME you might use. Which is fine until you grow weary of setting your message filters for Thunderbird again or forget which Firefox has that cached password for the local recreation center that you only use once a year. Pretty quickly you come up with the idea of a common directory for all shared configuration files (dot directories, collections of pictures, video, audio, presentations, scripts).

For one boot environment you do something like
$ mkdir /export/home/me
$ for dotdir in .thunderbird .purple .mozilla .firefox .gxine .xine .wine .staroffice* .openoffice* .VirtualBox .evolution bin lib misc presentations
> do
> mv $dotdir /export/home/me
> ln -s /export/home/me/$dotdir   $dotdir
> done
And for the other GNOME home directories you do something like
$ for dotdir in .thunderbird .purple .mozilla .firefox .gxine .xine .staroffice .wine .staroffice\* .openoffice\* .VirtualBox .evolution bin lib misc presentations 
> do
> mv $dotdir ${dotdir}.old
> ln -s /export/home/me/$dotdir   $dotdir
> done
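As an aside, here is a slightly more defensive version of that loop - a sketch only, with illustrative directory names, and it demonstrates itself in a scratch directory so it is safe to run anywhere:

```shell
# Demonstrate the move-and-symlink trick in a scratch directory.
# In real life HOME_DIR would be your per-release GNOME home directory
# and SHARED would be the common /export/home/me.
demo=$(mktemp -d)
SHARED="$demo/shared"
HOME_DIR="$demo/home"
mkdir -p "$SHARED" "$HOME_DIR/.purple" "$HOME_DIR/.mozilla"
cd "$HOME_DIR"
for dotdir in .purple .mozilla .thunderbird; do
    [ -e "$dotdir" ] || continue          # skip names that don't exist
    mv "$dotdir" "$SHARED/"               # move the real directory
    ln -s "$SHARED/$dotdir" "$dotdir"     # leave a symlink behind
done
ls -l
```

The existence check matters: without it, a missing dot directory turns into a dangling symlink pointing at nothing in the shared area.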
And all is well. Until......

I booted into Solaris 10 and fired up pidgin, thinking I would get all of my accounts activated and the default chatrooms started. Instead I was met by a rather nasty note that I had incompatible GAIM entries and that it would try to convert them for me. What it actually did was wipe out all of my pidgin settings. And sure enough, when I looked into the shared directory, .purple contained all-new and quite empty configuration settings.

This is where I am hoping to get some sympathy, since we have all done things like this. But then I remembered I had started time-slider earlier in the day (from the OpenSolaris side of things).
$ time-slider-setup
And there were my .purple files from 15 minutes ago, right before the GAIM conversion tools made a mess of them.
$ cd /export/home/.zfs/snapshot
$ ls
zfs-auto-snap:daily-2009-03-16-22:47
zfs-auto-snap:daily-2009-03-17-00:00
zfs-auto-snap:frequent-2009-03-17-11:45
zfs-auto-snap:frequent-2009-03-17-12:00
zfs-auto-snap:frequent-2009-03-17-12:15
zfs-auto-snap:frequent-2009-03-17-12:30
zfs-auto-snap:hourly-2009-03-16-22:47
zfs-auto-snap:hourly-2009-03-16-23:00
zfs-auto-snap:hourly-2009-03-17-00:00
zfs-auto-snap:hourly-2009-03-17-01:00
zfs-auto-snap:hourly-2009-03-17-02:00
zfs-auto-snap:hourly-2009-03-17-03:00
zfs-auto-snap:hourly-2009-03-17-04:00
zfs-auto-snap:hourly-2009-03-17-05:00
zfs-auto-snap:hourly-2009-03-17-06:00
zfs-auto-snap:hourly-2009-03-17-07:00
zfs-auto-snap:hourly-2009-03-17-08:00
zfs-auto-snap:hourly-2009-03-17-09:00
zfs-auto-snap:hourly-2009-03-17-10:00
zfs-auto-snap:hourly-2009-03-17-11:00
zfs-auto-snap:hourly-2009-03-17-12:00
zfs-auto-snap:monthly-2009-03-16-11:38
zfs-auto-snap:weekly-2009-03-16-22:47

$ cd zfs-auto-snap:frequent-2009-03-17-12:15/me/.purple
$ rm -rf /export/home/me/.purple/*
$ cp -r * /export/home/me/.purple

(and this is really, really important)
$ mv $HOME/.gaim $HOME/.gaim-never-to-be-heard-from-again
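In hindsight, a slightly more cautious restore would have set the damaged directory aside rather than rm -rf'ing it outright. A sketch - the scratch paths below stand in for the snapshot and live directories:

```shell
# Restore a directory from a read-only snapshot copy, keeping the
# damaged version around instead of deleting it.
# Paths are illustrative; the demo builds its own scratch copies.
demo=$(mktemp -d)
SNAP="$demo/snap/.purple"      # stands in for .zfs/snapshot/.../me/.purple
LIVE="$demo/live/.purple"      # stands in for /export/home/me/.purple
mkdir -p "$SNAP" "$LIVE"
echo 'good settings' > "$SNAP/accounts.xml"
echo 'mangled' > "$LIVE/accounts.xml"

mv "$LIVE" "$LIVE.broken"      # set the damaged copy aside, just in case
cp -r "$SNAP" "$LIVE"          # restore from the snapshot
cat "$LIVE/accounts.xml"
```

If the restore turns out wrong, the .broken copy is still there to compare against.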

Log out and back in to refresh the GNOME configuration settings and everything is as it should be. OpenSolaris time-slider is just one more reason that I'm glad that it is my daily driver.


Monday Mar 05, 2007

Live Upgrade on a Laptop ?

It is fascinating that one of the most interesting features in Solaris isn't new - in fact it's been around for quite some time. I'm referring to Live Upgrade which allows the installation or upgrade of an alternate boot environment while you are doing productive work in another. And why would I want to do this, especially on a laptop ?

I can think of several reasons.

Live Upgrade can be completely driven from ISO disk images of installation media. This means that I don't have to burn DVDs or CDs, or even better - no more late night runs to CompUSA or Fry's before the big presentation in the morning. And I won't miss the sound of the DVD reader grinding itself to death every time I install Solaris.

It significantly simplifies running multiple versions of Solaris. One of the most frequent questions we encounter in Installfests is which OS to run ? Solaris 10, OpenSolaris Community Edition, Solaris Developer Express, the nightly OpenSolaris BFU. And the answer is all of them. OK, maybe not for everyone - but even if you are going to choose one OS, upgrading to the newest update or applying a set of patches safely, while keeping the older environment as a fallback, is a good thing, especially if you are on the road.

And best of all, Live Upgrade is quite simple as long as you follow the rules - and fortunately there are only a few.

The plan

As with most things, the critical first step is the plan - and the earlier the better. For Live Upgrade this means the disk layout prior to installation. You will have to reserve some space for your alternate boot environment(s) when you initially install Solaris. Strictly speaking you can rearrange things later, but the mobile laptop user with a single disk drive will find relocating data and repartitioning the disk after the fact rather complicated. In other words, you are better off planning in advance.

Here is an example of a multi-boot partition table that has been designed for Live Upgrade.

So how many boot environments do you really need ?

I use three - one for my customer facing Solaris 10 environment, one for my OpenSolaris Community Edition daily driver, and one for whatever super-cool-gotta-have OpenSolaris project that is only available via BFU (today that is Xen, but it has been other things in the past and will likely be so in the future). And we all know that BFU means Better Forget Upgrades - but I have found that Live Upgrade is a simple way of preparing for a BFU. This third boot environment is also used as a backup during major Solaris updates (yeah, I would really like a 4th and 5th, but at some point you have to stop). Three is good.

How big do these need to be ?

That depends on how much stuff you carry around - specifically how much /opt will grow. There are some techniques you can use to reduce the storage requirements (sharing /opt directories across boot environments), but these tricks make other things more complicated (like zone creation). 6GB makes a good root file system, perhaps a bit larger if you start installing packages from Blastwave. Solaris Developer Express with the Studio development tools and NetBeans needs slightly more - 8GB to 10GB.

Familiarization and Reading the Furnished Materials

The next step is to familiarize yourself with the process. There is a wealth of information to be found in the Solaris Documentation on the web, specifically Solaris Live Upgrade and Upgrade Planning in the Release and Installation Collection. Even more important is Infodoc 72099 which lists all of the patches required for Live Upgrade. An excellent community reference is Derek Crudgington's Live Upgrade for Idiots.

At first glance you may find the documentation a bit intimidating. Perhaps this is why more people aren't adopting Live Upgrade in their environment. Live Upgrade can do all sorts of spectacular things such as installing from a flasharchive, splitting and joining file systems, breaking Solaris Volume Manager mirrors - none of which apply to our current situation. In fact when you strip Live Upgrade down to the few parts that we will be using, it is amazingly simple. Here is an outline of the steps you will perform (after thoroughly reading the documentation - I have to say that).
  1. Check Infodoc 72099 to make sure all of the prerequisite patches are installed (and remember to do this each time as Live Upgrade is an evolving product).
  2. Create the new boot environment if it doesn't already exist. You can name your current boot environment the very first time you do this.
  3. Download the DVD ISO image for the new Solaris release or update
  4. Mount the ISO disk image
  5. Install the packages SUNWluu, SUNWlur (and if present, SUNWlucfg) from the installation media (in Solaris_nn/Product) in your current boot environment. This is a critical step - incorrect Live Upgrade software is the most common cause of failure.
  6. Run luupgrade -u to upgrade your new boot environment
  7. Check the upgrade logs in /var/sadm/system/logs in the new boot environment. The boot environment is mounted as /a during the upgrade, or you can use lumount after the upgrade is complete.
  8. Activate the new boot environment using luactivate
  9. init 0 (also critical, as K0 RC scripts are installed to perform important steps like updating your bootloader configuration, syncing system files, and building a good boot archive).
It's really that simple. Let's see this all in action.

Preparing for Live Upgrade

I'm starting with a Solaris 10 6/06 system, freshly installed with my root file system (aka the primary boot environment) in c0d0s3. From my disk partition table you will see that I have two other boot environments available: c0d0s0 and c0d0s4. I plan on putting Solaris 10 11/06 in c0d0s0 and the latest OpenSolaris Community Edition (aka Nevada) in c0d0s4.

Since I've already read the documentation a few hundred times, I can proceed to patch analysis.

After comparing the patch list in Infodoc 72099 with what I have in Solaris 10 6/06 (showrev -p), I see that I am behind on four patches: 119255, 118855, 112653, and 121003. I fire up our new best friend updatemanager and see that I also need 121119 and 119255. These two will point updatemanager at the new repository.

So I apply 121110-08 and 119255-27, and restart updatemanager. This lets me find the rest of the patches that I will need.

I then apply 119255-32, 118855-19, 112653-04 and 121003-03. None of these required a reboot, so we can proceed immediately to the next step.

Creating the new boot environment

Since I will be upgrading from media and not a flash archive, I have to create a complete new boot environment, based on my current installation. This step will essentially clone the current system. As a side note, there are lots of fascinating uses for this that we will explore later.

Since this is the first time I am using lucreate, I also get to name my current boot environment. Future invocations of lucreate will be done without the -c argument.

# lucreate -c sol10u2 -n sol10u3 -m /:c0d0s0:ufs
This magic sequence told Live Upgrade to name the current boot environment sol10u2 and to create a new one called sol10u3 with the new UFS root file system on /dev/dsk/c0d0s0.

One more time, with output.
# lucreate -c sol10u2 -n sol10u3  -m /:/dev/dsk/c0d0s0:ufs
Discovering physical storage devices
Discovering logical storage devices
Cross referencing storage devices with boot environment configurations
Determining types of file systems supported
Validating file system requests
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Analyzing system configuration.
Comparing source boot environment <sol10u2> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Searching /dev for possible boot environment filesystem devices

Updating system configuration files.
The device </dev/dsk/c0d0s0> is not a root device for any boot environment.
Creating configuration for boot environment <sol10u3>.
Source boot environment is <sol10u2>.
Creating boot environment <sol10u3>.
Checking for GRUB menu on boot environment <sol10u3>.
Creating file systems on boot environment <sol10u3>.
Creating <ufs> file system for </> on </dev/dsk/c0d0s0>.
Mounting file systems for boot environment <sol10u3>.
Calculating required sizes of file systems for boot environment <sol10u3>.
Populating file systems on boot environment <sol10u3>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
At this point all of the system files from the current root file system are being copied to the new boot environment. On this laptop and with a rather complete Solaris install, it takes about 45 minutes.

Upgrading Live Upgrade

If you have not already done so, download and unpack the DVD ISO image for Solaris 10 11/06 (or whatever media you wish to use).
# lofiadm -a /export/iso/sol10u3/solarisdvd.iso
/dev/lofi/1
# mount -o ro -F hsfs /dev/lofi/1 /mnt
# pkgadd -d /mnt/Solaris_10/Product SUNWluu SUNWlur        (Remember to install SUNWlucfg if present)
And answer Y when asked if you wish to overwrite existing files.

Perform a Live Upgrade

We will now run luupgrade(1m) to upgrade the new boot environment. Since Live Upgrade doesn't support non-global zones you should delete all installed zones in the target boot environment. This restriction is relaxed in the latest OpenSolaris snapshots. Since I'm starting with a clean Solaris 10 6/06, there are no non-global zones, so I can proceed.
# luupgrade -u -n sol10u3 -s /mnt
This magic sequence says to run upgrade from the media source /mnt (mounted in an earlier step) against the boot environment sol10u3. The process will go something like
  1. Extract the miniroot from the source media
  2. Mount the new boot environment as /a
  3. Remove all of the packages from the new boot environment
  4. Install new packages from the source media
  5. Install a new boot archive
  6. Unmount the new boot environment
On this laptop the process takes about 2 hours. During the upgrade you can watch the process by doing something like
# tail -f /a/var/sadm/system/logs/upgrade_log
When the upgrade is complete, check the logs for any failed package installations. Some are harmless, but all should at least be investigated. Also look at /var/sadm/system/data/upgrade_cleanup for a list of files that had conflicts, and what Live Upgrade did to resolve them. Most of these are harmless, but you should check to see if anything looks out of place.
# more `lumount sol10u3`/var/sadm/system/logs/upgrade_log
# luumount sol10u3
# more `lumount sol10u3`/var/sadm/system/data/upgrade_cleanup
# luumount sol10u3
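A quick way to scan an upgrade log for packages that did not install cleanly is a case-insensitive grep. A sketch using a synthetic log (the real one lives under /var/sadm/system/logs in the new boot environment):

```shell
# Scan an upgrade log for packages that did not install cleanly.
# The log contents here are synthetic, for demonstration only.
log=$(mktemp)
cat > "$log" <<'EOF'
Installation of <SUNWcsr> was successful.
Installation of <SUNWfoo> partially failed.
Installation of <SUNWbar> was successful.
EOF
grep -ci 'fail' "$log"          # count of suspicious lines
grep -i 'fail' "$log"           # and the lines themselves
```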

Activating the new boot environment

The last step is to activate the new boot environment. This is all done by a K0 script, so after activation it is important that you use shutdown or init 0. Do not use reboot, as this bypasses the normal shutdown process and the K0 scripts will not run. This is also an FAQ!
# luactivate sol10u3
# init 0
At this point some serious magic will be invoked. The K0 script will update your bootloader configuration (in this case /boot/grub/menu.lst on the boot partition), a new boot archive will be built (just in case), and commonly modified system files will be synced. This last step is driven by /etc/lu/synclist.

Extremely important GNOME warning: GNOME configurations are not generally compatible across releases. If you are switching between Solaris 10 and Nevada, OpenSolaris, or Solaris Developer Express, you are advised to maintain separate home directories so that your GNOME configurations (.gnome, .gnome2, .gconf) don't collide. If you use different Solaris users for each release, you are done. But if you are just changing your home directory in /etc/passwd, Live Upgrade will gladly sync your old directory setting to your new boot environment - and that's not at all good. Either remove /etc/passwd from the file synclist, or remember to log in via the command line or a failsafe session once you've activated and booted the new boot environment and manually change /etc/passwd.
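For reference, /etc/lu/synclist pairs a pathname with a sync action. An illustrative fragment - the actual default list varies by release, so check your own before editing; deleting or commenting the /etc/passwd line stops the home directory setting from propagating:

```
/etc/passwd             OVERWRITE
/etc/shadow             OVERWRITE
/etc/group              OVERWRITE
/var/mail               OVERWRITE
```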

And that's really about it. Recovery is rather simple. After activation you will have all of your boot environments available for selection. Skipping the activation step isn't recommended as it bypasses various sanity checks and the file sync process, but when recovering a broken boot environment I think I can deal with things like old passwords.

Practice makes perfect

One more time, this time putting Nevada build 57 in c0d0s4. The only change here is that Nevada adds a new Live Upgrade package: SUNWlucfg. The magic sequence looks something like
# lucreate -n nevada57 -m /:c0d0s4:ufs
# mount -o ro -F hsfs `lofiadm -a /export/iso/nevada57/solarisdvd.iso` /mnt
# pkgadd -d /mnt/Solaris_11/Product SUNWluu SUNWlur SUNWlucfg
# luupgrade -u -n nevada57 -s /mnt
# luactivate nevada57
# init 0


Upgrading existing Boot Environments

Once you have used all of your boot environments, your next upgrade will require you to either delete one of them and start again, or upgrade an existing boot environment. The differences are minor (ludelete and lucreate versus lurename and lumake), but there are a couple of things you have to consider - if you don't, this will come back to haunt you when you least expect it.

Since there is no livedowngrade, you can only upgrade from an older version to a newer version. The implication is that you can't use Live Upgrade to install Solaris 10 from a Solaris 11 based system (Nevada, Solaris Express Community Edition, Solaris Express Developer Edition). That's not exactly correct: you can install Solaris 10 from a flash archive, but that's an installation, not an upgrade.

Now you know why I have three boot environments. I leave an old Solaris 10 around as a Live Upgrade source and then alternate the other two between whatever is the most interesting - today that is Solaris 10 11/06 as the legacy and Solaris 10 8/07 and Nevada in the ping pong set. Now that Solaris 10 8/07 has been released, it will become the legacy Solaris environment and I will go back to alternating between Nevada builds with and without xVM.

Another consideration, and a common source of problems, is that Live Upgrade expects the target environment to match the source environment. As an example, let's assume that I am driving a Live Upgrade from Solaris 10 into one of my Nevada ping-pong environments. It seems attractive to just upgrade the boot environment as it is - but it doesn't match the source. Some of the time, even most of the time, this will seem to work - but there will be times when drivers will not get replaced and you will be left with a hard-to-debug mess. The simple solution is to remember to lumake before you luupgrade.

One last thing - remember to check back with Infodoc 72099, as it does change occasionally - generally as a result of a Solaris release - and make sure your patch levels are correct.

One last example - my current boot environment is s10u3 and it is running Solaris 10 11/06. I have nv69 and nv71 running Solaris Express Community Edition builds 69 and 71. I want to put Solaris 10 8/07 in the nv69 boot environment and call it s10u4. After installing patches 119255-42, 121429-08, 126539-01, and 125419-01, the magic sequence goes something like

# lurename -e nv69 -n s10u4
# lumake -n s10u4
# mount -o ro -F hsfs `lofiadm -a /export/iso/s10u4/solarisdvd.iso` /mnt
# pkgadd -d /mnt/Solaris_10/Product SUNWluu SUNWlur SUNWlucfg
# luupgrade -u -n s10u4 -s /mnt
# luactivate s10u4
# init 0
And when a new Solaris Express Developer Edition or the next update of Solaris 10 is available, just repeat all of the steps above. I hope that this helps you understand an amazing Solaris capability and that you start using this on all of your systems, including your laptop.


Tuesday Nov 07, 2006

Updated multi-boot disk layout

It's been a while since our Getting Started article on multi-boot disk configurations. The old Solaris bootloader is long gone, in favor of the more flexible (and multiboot friendly) GRUB. ZFS is now available in both Solaris and OpenSolaris. The Branded Zones and Xen OpenSolaris projects are getting more interesting. And in making the best of a difficult situation, a laptop disk failure has given me an opportunity to re-think the original layout.

In addition to the project goals in the original article, here are some new capabilities that I would like to explore.
  • Zones - I want to build zones, and lots of them. Solaris as well as Linux zones.
  • Leverage Live Upgrade so that I can upgrade my system while winging my way across the US
  • Consolidate all of my home directories into a small but useful set
  • Be able to build OpenSolaris on a more regular basis
  • Find a more efficient way of sharing a large Open Source repository across multiple OS instances
  • Start using ZFS for more than just simple demonstrations
  • Reserve some space to play with Linux distributions such as Fedora Core and Ubuntu
  • GNOME 2.16 development builds for OpenSolaris


Before we proceed we need to make sure that the file systems will be sized properly. With only 80GB to work with, we will have to be efficient in a few places (like sharing the Software Companion, Blastwave repository, and Staroffice 8), even at the cost of a bit of complexity.

Estimated disk requirements

Component                                            Size    Notes
Solaris 10 (Entire Software Group)                   4GB     The main boot environment: Solaris 10, Staroffice 8, development tools, Software Companion, and the Blastwave repository
Software Express / Nevada (Entire Software Group)    5GB     Includes space for GNOME 2.16 development
OpenSolaris nightly build                            5GB     BFU from Nevada or nightly build from source
Software Companion                                   2GB     Installed in Solaris 10 /opt, shared by the other installations
Staroffice 8                                         500MB   Installed in Solaris 10 /opt, shared by the other installations
Compilers                                            1GB     Installed in Solaris 10 /opt, shared by the other installations
Blastwave repository                                 1GB     Installed in Solaris 10 /opt, shared by the other installations
Swap                                                 2GB     Not really needed for Solaris, but will be shared with Linux
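As a quick sanity check, the estimates above total about 20.5GB - comfortably within the 44GB set aside for Solaris. A couple of lines of shell arithmetic, with the sizes in MB taken straight from the table:

```shell
# Sum the estimated component sizes from the table (in MB):
# 4GB + 5GB + 5GB + 2GB + 500MB + 1GB + 1GB + 2GB of swap.
total=0
for mb in 4096 5120 5120 2048 512 1024 1024 2048; do
    total=$((total + mb))
done
echo "estimated total: ${total}MB"
```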


Taking all of this into consideration, the new disk layout looks something like this.

Partition  Size   Type           Mount point and notes
1          12GB   NTFS           Windows XP C: (/xp)
                                 Read-only access under Linux using the Linux-NTFS kernel modules; no access from Solaris
2          44GB   Solaris UFS    s0 - S10 boot environment (8GB)
                                 s1 - swap
                                 s3 - Nevada boot environment (6GB)
                                 s4 - OpenSolaris boot environment (6GB)
                                 s5 - ZFS slice 1 (2GB)
                                 s6 - ZFS slice 2 (2GB)
                                 s7 - /export (16GB)
                                 The Solaris swap partition is available to Linux as /dev/hda9
4          22GB   Extended       N/A
5          4GB    FAT32          Windows XP E:; /pc on Solaris (pcfs) and Linux (vfat)
                                 Device name is /dev/hda5 in Linux and /dev/dsk/c0d0p0:1 in Solaris
6          10GB   Linux (ext3)   / - Linux root (today Fedora Core, may soon be Ubuntu as Fedora Core 5 was a major disappointment)
7          6GB    Linux (ext3)   /export - shared home directory that can quickly be reused for CentOS 3 (for testing BrandZ things) and Ubuntu releases


Next time, a short example of using Live Upgrade to install a new version of Solaris Express while running the Solaris 10 daily driver.


Thursday Oct 13, 2005

A GRUB Configuration for Multiple Solaris Instances

In an earlier article I suggested that it might be possible to have more than one Solaris instance on a system. There are several ways that this can be done, some more supported than others.

Let's start with the supported method - Live Upgrade. Live Upgrade is one of the most useful features in Solaris, and this is the recommended method of maintaining multiple Solaris instances. You get the benefit of being able to share all of your data across all of your OS instances and recovery from upgrades is simple. And did I mention that this is the only supported method ?

It is quite possible that this experiment will take us well outside the design parameters of Live Upgrade, so let's consider some other possibilities.

Platform virtualization software such as VMware would do the trick, but we've placed a constraint of zero cost, and I'm not ready to give up - not yet. The Xen Project shows tremendous promise, but is still early days in development. So let's keep looking.

What does Solaris really require ? A single primary disk partition. That's it. Well, not exactly - there is an additional requirement that it be the only Solaris partition (id=0xbf). I notice that in our disk layout we had 2 available partitions, so if we can play games with partition IDs then maybe we can make this work.

This is where the order of OS installation became important. One feature of GRUB as bootloader is the ability to change partition IDs dynamically. We will use this feature to hide one Solaris instance from the other one. Let's look at how this plays in with the installation process.
  1. Install Windows XP from the vendor recovery discs. This creates one large partition with the Windows bootloader in the Master Boot Record (MBR) and in the boot sector of the first partition. At this point you may not notice the significance of this, but you will shortly.
  2. Resize the single NTFS file system (and associated partition) and create the remaining partitions as listed in our table.
  3. Install Solaris 10 3/05. Now we have the Solaris bootloader in the MBR and in the boot sector of the second primary partition. The Solaris bootloader supports chaining which allows you to boot both of the operating systems you have installed so far. It does not support chaining to a logical partition (in the extended partition) so this is as far as we will go with the Solaris 10 bootloader.
  4. Install Fedora Core 4 (or the Linux distribution of your choice). Now we will have GRUB in the MBR as well as the boot sector for the Linux root partition, which is in the extended area. This is what lets us play with partition IDs so that you can fake out the Solaris installer to put your next OS instance in the proper place.
  5. Install Software Express for Solaris in the remaining primary partition. This will put the newboot bootloader in the boot sector of our new Solaris partition but will leave the Linux GRUB in the MBR. This is exactly what we want (for the moment - we will look at how to make use of the OpenSolaris bootloader - but that's another article for a later day).


The last factor of this equation is the GRUB configuration file. This will be introduced in step 4, which will make step 5 possible. For our example configuration, the GRUB configuration file /boot/grub/menu.lst looks like
default=0
timeout=5
splashimage=(hd0,5)/boot/grub/splash.xpm.gz
hiddenmenu

title Fedora Core (2.6.11-1.1369_FC4)
	root (hd0,5)
	kernel /boot/vmlinuz-2.6.11-1.1369_FC4 ro root=LABEL=/ selinux=0
	initrd /boot/initrd-2.6.11-1.1369_FC4.img

title Solaris 10
	parttype (hd0,1) 0x83
	parttype (hd0,2) 0xbf
	root (hd0,2)
	makeactive
	chainloader +1

title Software Express for Solaris
	parttype (hd0,2) 0x83
	parttype (hd0,1) 0xbf
	root (hd0,1)
	makeactive
	chainloader +1

title Windows XP
	rootnoverify (hd0,0)
	chainloader +1


Let's take a closer look at one of the Solaris stanzas to see what's happening.
title Solaris 10
	parttype (hd0,1) 0x83
This sets the ID of the second partition (aka (hd0,1), /dev/hda2, or /dev/dsk/c0d0p2) to 0x83, which is a Linux file system. Not only will the current Solaris instance ignore it, but all other operating systems will think that it is proper data and leave it alone. This turns out to be very important for a Linux installer that might misinterpret other partition IDs and do something unexpected, like destroy the data.
	parttype (hd0,2) 0xbf
This sets the ID of our partition to 0xbf which is the new Solaris partition ID. It is now the only one of type 0xbf, so we have met all of our requirements. Very cool!
	root (hd0,2)
	makeactive
These two actions make our partition the active partition, which may be required for some parts of the Solaris 10 boot process.
	chainloader +1
Jump to the bootloader in the boot sector of our partition.

You would expect the stanza for the other Solaris instance to be similar, just swapping partition IDs - and you would be correct.

Why go to all this trouble ? Updates, specifically Fedora Core. At this point we are done with the Windows and Solaris configurations, but not with Fedora. Each kernel update will come with at least three new GRUB stanzas (the uniprocessor kernel, a Xen Dom0 kernel, and a DomU paravirtualized kernel). The RPM post-install scripts will modify /boot/grub/menu.lst, so it is much easier if we make this our primary bootloader.
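To make that concrete, a kernel update appends stanzas shaped like the Fedora entry shown earlier - the version strings below are illustrative, not a real update:

```
title Fedora Core (2.6.12-1.1456_FC4)
	root (hd0,5)
	kernel /boot/vmlinuz-2.6.12-1.1456_FC4 ro root=LABEL=/ selinux=0
	initrd /boot/initrd-2.6.12-1.1456_FC4.img
```

Since the RPM scripts only know how to edit the menu.lst they were installed with, keeping that file as the primary bootloader configuration means kernel updates keep working without manual surgery.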

Enough for now. Next time we'll actually do the first Solaris 10 3/05 installation and look at some of the post-install tasks that will make it easier for the frugal road warrior.


Sunday Sep 25, 2005

Bootloaders and order of OS installation

There is a tremendous amount of information on bootloaders available on the web. And a lot of the information is good. But some of it isn't, and the assumptions (and limitations that are suggested) can make this a lot more difficult than it needs to be.

So some very quick observations.

GRUB (the Grand Unified Bootloader) is the easiest to use of all of the bootloaders. It provides all that we need to boot Windows, Linux, and both of my Solaris instances. GRUB will be the final bootloader in the master boot record (MBR) when we are finished, but it won't get there right away.

The Windows bootloader, which is in the MBR at the moment, is very well suited to boot Windows. Making it boot other operating systems is almost an unnatural act. I may blog about my experiments in this area in the future, but the short version is that we want to get rid of it as quickly as we can.

Now, the Solaris 10 bootloader is a good intermediate step (which suggests that Solaris might be the next operating system to install). It will boot both Windows and Solaris (well one instance of Solaris) but doesn't work well with Linux distributions in the extended partition. Before we call this a deficiency or bug, we should note that we are now operating well outside of the design center, so a bit of tolerance will help get us through this step.

OK, so we have Windows, we'll do Solaris 10 next, but what about the two Linux instances ? Hmmmm, that's worth a bit of thought.

The Java Desktop System is more of an end user type of system, so things like kernel updates will come out on a regular schedule, but won't be too frequent. It is also based on SuSE, so new kernels are symbolically linked to /boot/vmlinuz so that the GRUB configuration doesn't change.

Fedora Core, on the other hand, is a rapidly evolving developer snapshot and kernels can be expected quite frequently. And since it was derived from the original Red Hat consumer distribution, the deployment method is to drop in a new kernel (or kernels) and then modify the GRUB (and Lilo) configuration as part of the installation.

Putting all of this together, the most maintainable solution is to end up with the Fedora Core bootloader in the MBR and add the static bits required to boot JDS, Windows, and the two Solaris instances. That makes JDS the best choice for installation after Solaris 10, with Fedora Core after that - and with some edits to /boot/grub/menu.lst we will have a flexible multi-booting system.

Oh, what about OpenSolaris ? Good question. I have it on good authority (OK, I've already installed it on a couple of systems) that it will leave the MBR alone, so it will go in last. I also know that we will have to do some serious partition manipulation to keep the two Solaris instances out of each others way - and GRUB from Fedora Core will do exactly what we need.

Next time, the Solaris 10 installation.


Carving up the disk

Now that we have a plan, time to get to work.

Resizing the NTFS partition is our first challenge. Using a commercial partition tool like Partition Magic would violate the prime directive, so let's see what's available in the land of free software.

qtparted seems like a good choice. It is a lightweight graphical front end, in the spirit of GNU Parted, that calls back-end worker programs such as ntfsresize to do the actual work. Since it is a small application it is very well suited for CDROM based Linux distributions such as Knoppix and the System Rescue CD. Since I'm downloading this over my home DSL line, I'll opt for the somewhat more compact System Rescue CD.

Note: We have experienced a few troublesome notebook computers in various installfests, and the more complete Knoppix distribution helped us solve some tricky issues. If you run into a situation where the CDROM distribution fails to boot (locks up or panics), then try passing boot options such as a framebuffer mode (fb1024x768) or noacpi and noapic to the kernel.

Before we break out the sharp tools, let's think about one more thing. If I've booted XP on this system, it is likely that the disk has become fragmented, or that something like a pagefile or suspend file is sitting in a really inconvenient place (like at the end of the partition). qtparted does a good job of adjusting the file system boundaries, but it won't reorganize the file system (since NTFS writes aren't considered safe). So let's boot XP into safe mode, run a disk defragmentation, and delete the pagefile.
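Before committing to the resize, it doesn't hurt to ask the worker tool itself whether the shrink is feasible. ntfsresize has read-only modes that report space usage and rehearse the operation without writing anything; a sketch from the rescue CD shell, assuming the NTFS partition shows up as /dev/hda1 (an assumption - check with fdisk -l first):

```
# Read-only checks; nothing is written to the disk.
ntfsresize --info /dev/hda1                  # report space in use and resize limits
ntfsresize --no-action --size 12G /dev/hda1  # dry-run the shrink to 12GB
```

If the dry run complains about unmovable files at the end of the partition, that's the cue to go back to XP for another defragmentation pass.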

Time to boot the System Rescue CD (or Knoppix if you prefer). Once booted and configured, start up qtparted. For the non-graphical System Rescue CD, there is a shell script called run_qtparted that will start up a minimal graphical environment.

The first task is to resize the NTFS partition. The Toshiba OEM configuration is a single large NTFS partition, so I select it and set the new size to 12GB. Since fragmentation isn't a problem, this operation succeeds. Now you are left with another big decision: do you carve the rest up now, or do it later when you install the various operating systems? If this weren't such a lab experiment I would suggest that you leave the unallocated storage as free space and let the installers gobble it up as needed. But we're going to push a couple of boundaries here, so I am going to carve up the rest of the disk now. I'll mark all of the partitions (except for the extended) as Linux, but we can change that later.

Now that we have our disk configuration, time to boot back to XP (maybe for the last time) to run a file system check to make sure things are OK. qtparted marks the file system as dirty so this should happen automatically when you reboot, but if not you can start it manually.
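If the check doesn't start on its own, it can be forced from an XP command prompt. Since C: is in use while Windows is running, chkdsk will offer to schedule the check for the next reboot:

```
C:\> chkdsk C: /f
```

Let it run to completion before declaring the resize a success.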


Getting Started

It doesn't matter whether you call it living the Open Source lifestyle or creating a Microsoft-free zone, the notion of building a fully functional mobile user environment out of openly available technologies has its appeal.

Before we get started, let's be clear on a number of requirements. This has to be a real functioning system that can do real work while on the road (which also means that some entertainment value must be found). Some of this is going to be easy, some may require a bit of thought.
So what are the minimum requirements?
  • A fast graphical display (that uses hardware acceleration)
  • Ability to connect an external projector for presentations and workshops
  • Easy to use Personal Information Management (PIM) tools
  • Document interoperability with the rest of the world
  • Easy user configuration of wireless and wired networking
  • A working DVD player for those long trips across the country
  • A software development environment


Some things that would be nice to get working might include
  • Bluetooth
  • Emulation of other popular application environments (wine, QEMU, Xen, etc)
  • Suspend/resume support

Oh - one very important caveat. I'm not afraid to try some strange unsupported things, so consider this a "do not try this at home" warning. That said, you probably will do some of these things, so I might as well share what worked and what didn't.

Let's see how far we can take this experiment without spending any money.

For this particular example, I am starting with a pretty basic Toshiba Tecra-M2 laptop system (for no other reason than it was readily available). This particular system came with Windows XP Home Edition pre-installed. It also has an 80GB internal disk which should allow some creative configurations.

The first step is to carve up the disk for the various operating systems and data partitions that we will need. Since I don't know where this is going, flexibility will be the primary requirement.

For my operating systems I have decided to leave Windows XP on, at least for a while. Yes, it's a crutch, but until I get everything working I want to be able to fall back and play a little Alpha Centauri while I work through troublesome spots.

I'm also interested in looking at both Solaris and OpenSolaris, so I'll plan for both.

And I might as well put on a Linux distribution or two - and like XP, the space may well be reclaimed later. For the Linux distributions I have selected Fedora Core 4 (as my Xen dom0) and the Linux version of the Java Desktop System.

I'm suddenly feeling like 80GB of storage might not be enough.

My disk partition plan is beginning to look like

Partition  Size  Type              Mount Point                              Notes
1          12GB  NTFS              Windows XP C:, /xp                       Read-only access under Linux using the Linux-NTFS kernel modules; no access from Solaris
2          12GB  Solaris UFS       -                                        s0 - Solaris root (10GB); s1 - swap (2GB), available to Linux as /dev/hda10
3          24GB  Solaris UFS       -                                        s0 - Solaris root (12GB); s1 - swap (2GB), available to Linux as /dev/hda10; s7 - /export (10GB)
4          30GB  Extended          N/A                                      -
5          4GB   FAT32             Windows XP E:, /pc on Solaris and Linux  -
6          10GB  Linux (ext3)      Fedora Core root                         -
7          6GB   Linux (reiserfs)  Java Desktop root                        -
8          10GB  Linux (ext3)      /export                                  -

Ahhhh, but what about Linux swap you ask ?

The Solaris slices are available to Linux, so I will take advantage of that and share swap partitions between Solaris and Linux. It also means that I will have to create the same swap slice in both of my Solaris partitions. As a safeguard, Linux requires that the swap partition be properly formatted, so we will do that later when we install our first Linux distribution.
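When that first Linux install rolls around, formatting the shared slice is quick work. A sketch, assuming the Solaris swap slice really does appear to Linux as /dev/hda10 (an assumption - verify with fdisk -l first, since mkswap will happily clobber whatever is on the device you point it at):

```
mkswap /dev/hda10    # write a Linux swap signature on the shared slice
swapon /dev/hda10    # start using it immediately
# make it permanent across reboots:
echo '/dev/hda10  swap  swap  defaults  0 0' >> /etc/fstab
```

One wrinkle with sharing: if Solaris overwrites the swap signature while it owns the slice, Linux will refuse the partition at boot, so re-running mkswap from an init script (rc.local, for instance) is a reasonable belt-and-suspenders move.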

Good grief, this is starting to sound complicated. You might be saying something like "with VMware I don't have to think about any of this, I just create things and they run." And that might be true, but remember the prime directive - this is to explore just how far we can take a commodity system that can be built out of free parts. So VMware is out, but perhaps Xen can perform that role - we'll certainly explore that idea with vigor.

So much for the plan. Next time we'll carve up the disk and get started installing some software.

About

Bob Netherton is a Principal Sales Consultant for the North American Commercial Hardware group, specializing in Solaris, Virtualization and Engineered Systems. Bob is also a contributing author of Solaris 10 Virtualization Essentials.

This blog will contain information about all three, but will focus primarily on topics for Solaris system administrators.

