Thursday Mar 27, 2008

OpenSolaris Community Innovation Awards Program

What? A chance to win big bucks

No, not some silly big-whitetail-deer hunt like you'd see on the Outdoor Channel, but serious moolah - aka beaucoup dinero - for slinging some code that runs on Solaris. It's the OpenSolaris Community Innovation Awards Contest.

I got sucked into this community advocacy project a few weeks back by Jesse, Jim, Teresa, and Alta. Alta was the real culprit, though. She's reviewing this device driver white paper (part 1 of 3) that's going up on Sun's BigAdmin someday soon, and then suddenly, with 80% of the HTML formatted, she said there was this big pot of money and Sun was giving away $1 million. Now, if you're in the EU, or any other country, you're probably yawning and mumbling... "WTF... no biggee, what's $1 million USD worth these days anyway..."

But to make a long story short... Alta hinted this was a big project... aka... double-speak meaning that she's probably holding my white paper hostage until we deploy. So I got interested, checked in on the conf call, and got pulled into helping drive this contest; anything to speed things up so she'll eventually get around to finishing the review and edits and putting it up on the web.

In short, the contest is just 1 of 6 smaller contests. Each part - OpenSolaris, NetBeans, OpenOffice, HPC, or Java - gets around $175K. Added together, it's over $1M USD. Each group consults its own community and makes up its own rules. Sun insists primarily on contestants being individuals or teams of individuals (not companies) and on everyone signing Sun's Contributor Agreement, which licenses all contributions under the CDDL open source license. Beyond that, each group makes up its own scheme for judging and payout. Sun's legal team kicks in resources just to make sure the rules are legal and can be fulfilled in all the eligible participating countries.

For OpenSolaris, the Contest is seeking code and non-code entries that run on OpenSolaris, play on OpenSolaris, or advocate and help others use OpenSolaris... just about anything that's goodness, earth-mother-apple-pie kinda stuff related to OpenSolaris. The prizes aren't bad. The team decided to break things up: $100K for the contest and $75K for undergrad university research. For the contest, the top prize is around $30K I think, with 3 x $15K second-level prizes and 25 x $1K prizes. Which means that if you're one of only 29 contestants who actually submit anything, you could win some money. On the University Grant side (which is opening soon), the prizes range from $1K - $5K and are designed to fund a small research project or augment something bigger. The prizes are given out for proposals, not the actual results, although winners who take money are obligated to give a short final summary of their results.

The registration process

Due to a shortage of web resources, I got picked to offer my time on nights and weekends to build the registration system. We managed, after quite some hassle, to obtain a Solaris zone inside Sun's firewall and stage the pages there. Eventually, Sun's IT team would assign the "" domain to the zone and we'd be live. Jim Grisanzio gives me some credit for putting up the site. It was simply a bunch of JSPs and servlets I've had for about 5 years, which I developed for my school's PTA website. I created a simple flat-file database which stores entries encrypted in separate files. That way, if the ISP isn't all that safe, at least the data is pretty useless to a hacker unless they have a strong math and decompiler background and can figure out my storage system (which stripes part of the key remotely; it's fetched at server startup). Anyway, I hope the interface isn't too geeky. The real designer for the UI, I found out, was Derek Cicero, and possibly others. They helped with all the standard CSS stuff that would take me forever to format in stock HTML. Coding is easy. UI is hard.
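For the curious, the storage scheme boils down to something like this toy Python sketch (the real code is Java servlets; the function names and the hash-based XOR-stream cipher here are my own illustration - a real system should use a vetted crypto library):

```python
import hashlib
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Expand key+nonce into a pseudo-random stream by hashing a counter."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_entry(data: bytes, key: bytes) -> bytes:
    """XOR the entry with a fresh keystream; prepend the nonce."""
    nonce = os.urandom(16)
    stream = _keystream(key, nonce, len(data))
    return nonce + bytes(a ^ b for a, b in zip(data, stream))

def decrypt_entry(blob: bytes, key: bytes) -> bytes:
    nonce, body = blob[:16], blob[16:]
    stream = _keystream(key, nonce, len(body))
    return bytes(a ^ b for a, b in zip(body, stream))

# The full key never sits on the ISP's disk: one half lives beside the data,
# the other half is striped remotely and fetched once at server startup.
local_half = b"half-kept-beside-the-data-files"
remote_half = b"half-fetched-at-server-startup"  # stand-in for the remote fetch
key = hashlib.sha256(local_half + remote_half).digest()

blob = encrypt_entry(b"registrant profile goes here", key)
assert decrypt_entry(blob, key) == b"registrant profile goes here"
```

Even if an attacker copies every entry file, they still need both key halves plus the storage layout to recover anything, which is the "useless to the hacker" property described above.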

Hopefully people won't balk at the UI. I apologize in advance. But give the contest a try. Registration isn't too difficult. I hope it's logical. Steps are the following:

  1. Go to and register on that system. It's separate from the main site for a number of reasons which I won't explain here. But just register as a user.
  2. Next, if you have an idea for a contest entry, you can submit it and it will be stored along with your profile.
  3. You have a few options to select if you want others to see your private information or not, or if others can join or not. But as a community effort, even if you don't publish your name and contact info, the title and brief description of your submission are posted on the list.
  4. When you're ready, you can upload your wad. We support up to 10MB as a wad of stuff. Standard archives are accepted (e.g. .zip, .tgz, .tar, etc.). If it's bigger than that, then you can submit a URL to us.
The system will allow you to edit, clobber, and refine your submission, entry, and registration data up until the deadline for submitting the actual contest entry. So you can visit early and tweak often. Hopefully, if you have the initiative and are one of the favoured 29, you'll win some money.
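Server-side, the size and format checks in step 4 boil down to something like this hypothetical sketch (the 10MB cap and archive list come from the text; the function itself is my illustration, not the site's actual code):

```python
MAX_BYTES = 10 * 1024 * 1024  # the contest's 10MB upload cap
ALLOWED = (".zip", ".tgz", ".tar", ".tar.gz", ".tar.bz2")

def check_upload(filename: str, size_bytes: int) -> str:
    """Return 'ok', or the reason an entry should be submitted as a URL instead."""
    if not filename.lower().endswith(ALLOWED):
        return "unsupported archive type"
    if size_bytes > MAX_BYTES:
        return "over 10MB - submit a URL instead"
    return "ok"

print(check_upload("my-entry.tgz", 4_200_000))    # ok
print(check_upload("huge-demo.zip", 42_000_000))  # over 10MB - submit a URL instead
```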

Friday Sep 28, 2007

Intel D201GLY mini-ITX - a Low Cost Solaris Solution

Another Low-Cost, Small-Footprint Home/Office Computer

Last week, in the midst of rambling on about the issues in SXDE 9/07, I briefly mentioned that I had an Intel D201GLY mini-ITX board. The board has probably been around for a while. I think there may have been some OEM-marketed version available to volume system builders earlier. But probably sometime in the first half of 2007, the Intel D201GLY became available online for folks like me who buy retail.

Fig. 1 Intel D201GLY mini-ITX motherboard.

For such a small form factor board, the retail price seems to vary between $60 - $80, which is a bargain if you consider that this is for a motherboard with audio, video and LAN, plus an on-board CPU. Even the older VIA Epia 800 first-generation mini-ITX system boards still cost around $95 - $120 online. This Intel board has a power-efficient Celeron M 215 at 1.3GHz, a single DDR2 533 slot, a single IDE pin header, 2 sets of USB headers, and 1 set of internal front-audio pin headers, but no SATA jacks. And while the board looks clean and well built by Intel, the chipset is the SiS 662/964L combination with Mirage 1 graphics and SiS 900 10/100 Fast Ethernet.

I've spent the last week twiddling at home during spare time, trying different cases to put this board into, measuring power, and getting Solaris to work well on it. Below, I go through some of the configurations and settings that got this working.

Picking Out A Case

Mini-ITX motherboards will fit Flex-ATX, Micro-ATX, XPC and ATX cases in general. But the real attraction is being able to stick the board into a small, very appealing case that makes people go, "Wow! a tiny computer!" when they see it. The only concern with this Intel board is the height of the heat sink and fan over the CPU (which is soldered on in BGA packaging). It's about 1.8 - 1.9 inches tall with the mobo on a flat surface. That's taller than 1U, so a rackmount case like a 1U half-width, short-depth box could work if I could find a 40mm replacement copper heat sink and a thin 40x40x10mm fan. However, with no changes to the motherboard/heat sink/fan, the board fits nicely into both the Casetronic 2677 and 2699R mini-ITX cases. There's just about 3 - 4mm of clearance under the case covers, and if heat is a worry, I can always drill holes in the top cover just over the CPU fan.

I like the 2699R case better because it has front-panel USB and audio jacks, which are supported by the pin headers on the motherboard. The 2677 is also problematic; as an older model, its ATX extension cable isn't long enough to reach across to the rear of the board where the power pins are, so an inexpensive ATX extension cable is required. The cases sell for around $65 - $75 online and include a big external brick-style AC adapter (like for a laptop) and a 12VDC converter daughter-board that provides a 20-pin ATX connector to the motherboard. The daughter-board power supply has a big advantage in that it can be about 94 - 96% efficient at converting 12VDC into usable system power. Typical power supplies inside PCs are less than 70% efficient at full power, and when the board only draws 25 watts, the P/S may actually be sucking 50 watts, or double the power.
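To put numbers behind that efficiency claim, wall draw is just the DC load divided by the supply's efficiency. A quick sketch (the 95% and 50% figures are illustrative assumptions, not measurements):

```python
def wall_draw(load_watts: float, efficiency: float) -> float:
    """Watts pulled from the outlet for a given DC load and PSU efficiency."""
    return load_watts / efficiency

# A 25W board load through the ~95%-efficient DC daughter-board vs. a
# conventional PSU that sags toward 50% efficiency at light load:
print(round(wall_draw(25, 0.95), 1))  # 26.3
print(round(wall_draw(25, 0.50), 1))  # 50.0
```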

Fig. 2. Casetronic 2699R mini ITX chassis.

If that price is a bit high, it's still possible to find some places that sell the Enlight 7396AM1 for just $20 plus shipping. It's a very high quality case at a budget price, and although it's bigger, it's still smaller than most mini-tower cases.

Drives and Cables

Both my case picks, the Casetronic 2699R and the Enlight 7396AM1, support just a slim optical drive, one floppy or USB multi-function card reader, and a 3.5 inch hard drive. The NEC/Sony Optiarc 8X DVD burner is a low-cost slim drive which runs around $50 - $60 online. There are NEC/Sony and Samsung versions of the CDRW/DVD combo slim drive which work similarly for around $40 - $45. Whichever one you choose, you'll need a short adapter that connects the slim optical drive's modular IDE jack to a standard 40-pin IDE plus power. The unfortunate thing is that Intel didn't include SATA support, which would have reduced cable and connector crowding, especially in the Casetronic case. In addition, the optical drive bay butts up closely against the CPU heatsink/fan. There isn't much room for the slim drive, the adapter and the 40-pin cable, so the adapter must be pretty low-profile. (Note: I bought a half dozen adapters on eBay from a vendor in Hong Kong - service was good, but the adapters were poorly made: the pin headers stick out the reverse side of the PCB, and during screw tightening the pins can touch the back of the drive chassis, short out, and destroy the drive. I recommend metal clippers or diagonal cutters to find and clip the protruding points down, and a piece of masking tape on the back edge of the slim drive to protect it from shorting out.)

The 3.5 inch hard disk is just a standard IDE drive. I could shave about 3 - 5 more watts by going to a 2.5 inch laptop drive; this requires a 44-pin to 40-pin IDE adapter for the conversion. The Casetronic 2699R has pre-drilled holes to mount a small 2.5 inch hard drive, while the Enlight case does not. But the latter is much larger and includes a disk tray with rubber grommets that provide shock support for the drive.

Note: The Sony/NEC Optiarc DVD burner appears to be set in firmware to be the master - meaning it wants to be the primary device on any cable. Because the board only has a single 40-pin IDE header, the NEC/Sony must be the primary, and the hard disk must be a slave on the same cable. The simplest solution is to use the cable-select jumper setting on the hard drive and make sure it gets the 2nd IDE plug on the ribbon cable, with the first IDE plug in the slim DVD drive. Otherwise, it may be hard for the system to discover which drive is primary and which is secondary.

Note 2: The Enlight case is designed for a MicroATX motherboard. The mini-ITX board is quite a bit smaller still, so the USB front-panel connector cables, as well as the standard IDE ribbon cables, are a bit too short to reach from the front of the case or the hard drive to the motherboard sockets. I solved this by ordering an internal USB pin-header extension cable, which adds about 10 - 12 inches of reach to connect the front-panel USB.

Solaris SXDE 9/07 installation

Installation wasn't too bad, except for the Xorg noise and streaks on the screen. As I found out, the installer uses the sis Xorg module. It works for the previous SiS 661 northbridge, but not here on the SiS 662. Luckily, others have run into this problem on Linux, and the recommendation there was to switch from the sis module to the vesa module in Xorg. The way to do this in Nevada is to simply run the xorgcfg graphical configuration utility, edit the properties for the graphics card, choose "vesa" for the module, then log out and log back in.

# /usr/X11/bin/xorgcfg
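After running xorgcfg, the relevant Device section of /etc/X11/xorg.conf should end up looking something like this (a sketch; the Identifier string will vary by system):

```
Section "Device"
    Identifier  "Card0"
    Driver      "vesa"    # was "sis"; Mirage 1 on the SiS 662 misrenders with it
EndSection
```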

The LAN uses Murayama's free SiS fast ethernet (sfe) driver. Installation was quick and painless because I keep a CD-ROM with most of the free Solaris drivers on it. But I could have put this on a USB jump drive as well.

Note: The default with SXDE 9/07 is to use Network AutoMagic. To set a static address, you can use the GUI tool in the Gnome SysAdmin menu. But that doesn't work if nwam is managing the interface. So I run:

# svcadm disable network/physical:nwam
# svcadm enable network/physical:default
Then I can use the graphical network configuration tool.
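To make the static setup survive reboots without the GUI tool, the classic configuration files look something like this (a sketch; the interface name sfe0, the hostname, and the addresses are assumed values for illustration):

```
# /etc/hostname.sfe0 - one line naming the interface's host entry
myhost

# /etc/inet/hosts - map the name to the static address
192.168.1.50    myhost

# /etc/defaultrouter - the gateway
192.168.1.1
```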

As for audio, the onboard SiS AC'97 chip (pci1039,7012) isn't supported by the default Solaris drivers. Here, I use Juergen Keil's free Solaris audio drivers. Interestingly, I tried to pkgadd the wad of packages, but the actual drivers refused to be copied over for some reason. Instead, I ended up rebuilding the packages from source, running a manual make reallyinstall inside each pkg directory, and installing from the command line. I was able to confirm the install put the driver into /platform/i86pc/kernel/drv/audioi810. The post-install was a bit more problematic. It tries to run add_drv with a list of hardware PCI device IDs, some of which collide with other existing, supported AC'97 devices. I manually edited the add_drv command inside the Makefile and was then able to add_drv the audioi810 module for just the SiS 7012 audio controller.
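The trimmed-down post-install step amounts to binding the driver against just the SiS 7012 ID (a sketch of the command; check the driver's Makefile for the exact alias quoting on your system):

```
# add_drv -i '"pci1039,7012"' audioi810
```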

Power Consumption and Usability and Final Thoughts

With a few more of my standard software pkgs installed, a cheap laser printer hooked up, and security, IPfilters, backup scripts and cron jobs to manage the system, the small box makes a nice home/office workstation. It's great for word processing, spreadsheets, light digital-camera editing, web browsing and email. My wife liked it immediately because it had the latest browser JavaScript fixes that eliminate the funny popup 2nd-mortgage offer bubbles she was seeing when accessing several Citibank websites to pay our bills.

I like the fact that the system, in the small Casetronic case, only draws 26 - 33 watts. It's got fairly good performance for rendering pictures, scaling operations, and ripping MP3 audio. It's quite a bit faster than the older VIA c3 systems like my Epia 800 box. Also, Intel has implemented some nice features such as temperature-sensitive voltage control on the case fan's 3-pin power plug, so the case fan isn't always noisy. It only gets loud when the case gets hot, or briefly, for a second during a reboot.

Relative to the older Epia 800, which uses between 13 - 22 watts, the Intel D201GLY isn't quite as efficient. But it is much faster and gives the newer VIA c7/cn700 systems some competition. I've tested the FlexATX form-factor PCChips v21G, and in an efficient enclosure with a DC power supply, that c7 mobo averages 24 - 35 watts and still feels a tad faster - as it should, since its c7 CPU is clocked faster at 1.5 GHz. But this Intel board is still fairly green and a bit quieter for those folks who want Intel build quality. My wife and I feel like we're splurging, using double the power.

Note: The same Intel D201GLY board stuck inside the Enlight case with a conventional TFX12V power supply draws 49 - 62 watts! The difference is completely due to the inefficiency of the power supply. I've priced a small solid-state 80-watt tiny ATX DC power supply with an AC brick-style adapter. A conversion kit may cost between $40 - $80, and it would put a 94%-or-better efficiency DC power supply into the same box. It could save about $40/yr in electricity if left on 24 hrs/day, so it'd pay for itself in a year or two and reduce power consumption further.
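As a sanity check on that $40/yr figure, the arithmetic is straightforward (the electricity rates here are my assumptions; actual utility rates vary):

```python
def annual_cost(extra_watts: float, rate_per_kwh: float) -> float:
    """Yearly cost of an extra always-on load at a given electricity rate."""
    kwh_per_year = extra_watts * 24 * 365 / 1000
    return kwh_per_year * rate_per_kwh

# Roughly 25W saved (conventional supply ~55W avg vs. efficient DC ~30W avg):
print(round(annual_cost(25, 0.12), 2))  # 26.28
print(round(annual_cost(25, 0.18), 2))  # 39.42
```

So the $40/yr savings holds around $0.18/kWh; at cheaper rates the payback stretches a bit longer.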

Total cost was: $75 mobo + $80 case p/s + $40 hdd + $55 dvdrw + $39 1GBDDR2 + $15 cables = $304.

Friday Sep 21, 2007

SXDE 9/07 Installfest and the Work Around

Release of SXDE 9/07 Edition is Imminent

Last week, I wrote all the gory details about my recent salmon fishing trip up north in BC. I think I mentioned that I had also managed to bring along the latest Solaris Express Developer Edition release (SXDE 9/07) to upgrade a few machines while I was up there. In this edition of my blog, I thought it might be instructive to note some things I had to do on some platforms to get it working - not only for public consumption, but so that I'd remember what to check in the next few revs of Nevada to see if those things have changed.

Background on SXDE 9/07

For those of you who download and get Nevada regularly, this section is old stuff. But I find it instructive to give a broad picture, as I've seen it from the inside, of where SXDE came from. As far as I can recall, the existing SXDE release evolved from the original Solaris Express program. It's still basically the same thing, just re-branded: Nevada (aka Solaris 11, aka Solaris Next, aka OpenSolaris - sorta). The older Solaris Express was a roughly monthly or bi-monthly snapshot of stable bits from Solaris Nevada, the next version of Solaris in development. Because of Nevada's typical 2-week build cycle, the older Solaris Express was basically the most stable of each group of 3 - 4 builds, which our volume developer program folks would push out for public download.

The SXDE rebranding did two things. First, it stretched the interval out to every 3 - 4 months for a developer release, which was more reasonable than every 6 - 8 weeks, since that's hardly enough time to evaluate a release and test-compile a bunch of apps. Second, SXDE bundled the developer tools into a new installation script that saves the end user from registering for and downloading yet another wad of stuff with the compilers, NetBeans and all the other Sun Studio tools.

Recently, some folks may have heard about Project Indiana - aka the new "OpenSolaris." I'm borrowing the "OpenSolaris" moniker in this blog just for the time being, and I use it interchangeably with Nevada - Solaris 11. But I should qualify for readers that the official "OpenSolaris" will change and evolve as a new distribution under Project Indiana. The program is being run in the community and headed up by new Sun employee, and former Debian Linux guy, Ian Murdock. It's great to have Ian leading the new project team for "OpenSolaris." His leadership in the Linux community in the early days gives him a perspective on the needs and wants of Linux users that many enterprise Solaris management folks don't have. I'm sure there are blogs and message boards where Ian personally responds about Project Indiana. But expect him to change and shape what OpenSolaris is now and take it to a more desirable place for end users and developers alike. Also expect avenues of participation that can change the way Solaris gets distributed. This may be an opportunity to take a look-see at OpenSolaris.ORG.

Getting back to this next release, SXDE 9/07 adds a few more features, fixes more bugs, and comes with a nice-looking new GUI installer. It is based on Solaris Nevada build 70, but underwent two more iterations (build 70a and then finally build 70b) before getting the full product team's endorsement to release.

InstallFest Imperative

Internally, before we release any version of Solaris, upper management likes to encourage all software engineers to download the pre-release bits and test them, presumably on our own hardware. As part of our jobs, we already do quite a bit of sanity testing before the bits go out for every build. But being the engineers we are, most of us don't actually do the normal install that regular users out there would use. Nope. We do the Big-Friggin' Update (aka BFU, aka Bonwick-Faulkner Update, aka Blindingly Fast Update). BFU is a process that basically flashes and clobbers the bits onto your running system (with some other cool technical details omitted) in just a few minutes.

Usually, BFUs are fast and friendly, and before we know it, we've upgraded a system, making it ready for more Solaris development. And because we use BFUs, we probably don't do as much testing of the actual installer our customers use, and we avoid a lot of the idiosyncrasies that exist only in the installer. So why not make BFU the standard? Well, BFUs aren't pretty when they don't complete safely or cleanly. Any Solaris sysadmin worth his/her salt should be able to figure out how to recover, but for most users this is probably an unacceptable risk/burden, so we do recommend using the actual installer even though we don't use it ourselves. Suffice it to say: "Do as we say, not as we [Solaris engineering] do."

Upper management isn't blind to this irony. Appropriately, they've tried to put their feet down and insisted that all internal engineers actually do an installation from optical media, file bugs on the upcoming SXDE versions, and fill out a feedback form. But this is high-tech corporate America. Management putting its foot down doesn't mean much in terms of threats; the stick simply doesn't work. Instead, they dangled the carrot of a possible raffle for prizes for internal employees who complete an install and then fill out a survey. The carrot was neither a sure-catch carrot (i.e. not everyone wins a raffle) nor a fat carrot (i.e. the prizes weren't very expensive either). While I would have appreciated a new Sony TX-series ultra-portable notebook as a grand prize, the product team was giving away old junk: an old Java tote bag, a wimpy 1XL t-shirt from last year's trade show, or maybe a baseball cap. Unbelievably, despite the dearth of good prizes, I found myself caught up in the jackpot fever of the raffle even though I only fit 3XLT t-shirts!

For the first SXDE, I was announced the internal winner of the Most-Installfest-Submissions award. No gifts ever got distributed (at least I never received any), except maybe a laser-printable certificate in my email inbox, which I had to print out myself. Yeah, it's sort of hokey - but times are tough, and I'd rather keep the stock up from the days when it was just $3/share. But it shouldn't have been much of a surprise to folks that I would have the most. I have the most cheap x86 boxes of any engineer I know, except maybe for Brian Dowdy (i.e. think of us as "Hardware-Hos"). So, for this second\* SXDE Installfest, I again came out on top with most submissions. (\* We did have plans to do a second SXDE around the Nevada b64 timeframe, but decided to wait until b70.)

This time around, I did get some prizes. Like two XL t-shirts. I don't fit them, but maybe in 10 years my son will grow big enough, unless his younger sister out-eats and outgrows him. I also received a rich, Corinthian-leather notepad/portfolio thingy (say that with Ricardo Montalban's accent) which holds some paper and pens. I'm not sure what I'll do with it, since most everything I write these days goes through a keyboard. I didn't even use it to take notes on the installation issues I discovered. But if they gave me that tiny Sony TX-series 2.7-lb subnotebook with 2GB memory and 32GB of boot flash... maybe I'd double my efforts!

Installation Idiosyncrasies

With any new version of Solaris, there's a debate as to whether to upgrade or do a fresh install. My testing philosophy is simple. I have spare disks around. For testing, I yank out the mission-critical data, set it aside, stick a new disk inside (probably with Solaris already on there), and then a) upgrade when I can, and b) after the successful or unsuccessful upgrade, reinstall from scratch.

Upgrade or Fresh Install?

For those folks who have done upgrades before, we know there are two major problems with them. A) Not all things get upgraded. Some packages aren't removed and some files aren't replaced. Some new stuff doesn't get installed. There are ways to find out internally what the upgrade will clobber and add, but in essence, it gets complicated to find out what should have been upgraded but wasn't. The only way to be safe is to do a fresh install. The other big problem is that B) the upgrade is very slow. The installation console claims it may take 2 hrs or longer, but in reality, unless you have honkingly fast storage and a processor, the upgrade can take as much as 6 hours or more to complete, especially if you're trying to upgrade an older system. Typically, I can do a fresh install in 40 minutes or less.

The solution I've come up with is to simply put all my data on partition slices which I'll preserve, then do a fresh install on the root (/) slice. This saves quite a bit of configuration and reinstallation of apps. In other words, I put most stuff on the /export and /opt slices and only clobber the root (/). The only problem is that SXDE needs to clobber the devtools in /opt, and it clobbers the package tracking for all the extra-value freeware I've added, so I can't really use the packaging mechanism to remove a package any longer. Re-installing and configuring all that stuff takes time again. I live with this solution since I don't usually remove freeware packages, and many I compile from scratch and don't install by pkgadd(1M). But this might be a dilemma for some, in which case the upgrade path might be better. Regardless, I recommend keeping home-directory data on a separate slice and preserving it.

This new SXDE presents some upgrade-or-fresh-install issues that folks should be aware of. A) The new GUI makes it easy to install from scratch, but before you realize you've clicked all the buttons, you may have skipped the steps needed to preserve partitions on your disk. If you'd like to take more time and not use the new Dwarf Caiman installer, you may want to simply use the older installer (i.e. option 2: Solaris Express). B) The new SXDE installation has issues with older disk slicing and numbering and may not be able to upgrade you. This was supposedly fixed in build 70b, but you may still encounter the issue that the disk partitioning can't be upgraded, and the GUI may only allow a fresh install with fresh partitioning that will wipe your disk. The unsavoury solution is to rsync your data to another disk somewhere, do a fresh install, then rsync it back. I have about 25 GB of data for myself and my wife at home. It takes about 1 episode of Stargate SG-1 to move that data off the install disk. This is still faster than doing an upgrade, and I've got scripts that restore my data and some server and network configuration too. Another option is to stick 2 disks in the box and mount the second later. But I don't like the extra cost of 7 watts to power another drive. A single drive, and better yet a single notebook drive, is what I prefer on my home system for improved acoustics and lower thermal dissipation. Not to mention, I pay less for electricity with my systems always on.

Graphics Not Working for VIA Unichrome/CastleRock

SXDE and Solaris in general use Xorg. We're following the distro fairly closely these days, and that's added support for more and more graphics adapters. But it's also dropped support. One area is the older VIA CastleRock/Unichrome (CLE266 chipset and prior generations) onboard graphics. When I tell people internally that I actually use the onboard graphics, some scoff at me. They suggest I blow another 20+ watts and get a cheap ATI Radeon 7000 or nVidia card (some of which suck another 50+ watts by themselves). And maybe that would work. But there is a bug filed against this problem internally for VIA embedded graphics. I used to have a workaround for the VIA Unichrome issue, which was to take the buggy S10u1 SUNWgraphics-ddx package, unbundle it, extract out the file, and copy it into /usr/X11/lib/modules/drivers/ to clobber the existing module. This worked through Nevada build 60-something. But since build 63 or 65, that no longer has ABI compatibility; I now get core dumps. This is true even with S10 8/07. Luckily, there are cheap VIA c7/cn700 chipset boards these days with Unichrome Pro graphics, which are supported extremely well.

I can't say that about the SiS Mirage graphics. I have an older SiS 741 chipset with Mirage graphics which worked okay with Xorg around the Nevada build 55 time frame (SXDE 1), but with the latest Xorg, the pixel quality at native resolution is really blurry, and on my Intel D201GLY mini-ITX board, the graphics are unusable beyond 800x600 pixels. However, Fedora Core Linux and Windows seem to have solved both the VIA and SiS graphics issues in their X distros. My solution for now is to relegate these boards to headless/server use, not home use. But for SXDE2, the GUI installer requires the graphics. Otherwise, you're back to booting option 2 - the old Solaris Express installer - which will fail or be unacceptable, at which point I'd recommend just installing using the text console.

Network Automagic

Many of us have been trying the new NWAM (NetWork Auto-Magic) feature. It's controlled through SMF: you svcadm disable network/physical:default and enable network/physical:nwam. This works well, I hear, from a number of folks. They all pretty much have a single primary wired ethernet interface on a laptop that uses DHCP. That's where NWAM works now.

But it seems that since build 60-something of Nevada, the folks supporting standard DHCP for ethernet interfaces have fixed some long-standing bugs. For years now, many of us inside guys have been complaining and filing RFEs against the ifconfig(1M) command in Solaris. Simply put, it was moronic to ignore the standard DHCP fields for DNS under most conditions. Instead, we focused on getting DHCP to work inside our Sun-centric NIS environment. DHCP would work most of the time and plumb the /etc/resolv.conf file and alter the /etc/nsswitch.conf file under the right NIS network conditions. But for some reason, it would rarely work when doing DHCP over a standard connection. Well, since b60-something, this appears to have been fixed. I'm not sure if the NWAM guys had a hand in this, but thanks to whomever did.

There are still some outstanding issues when it's a WiFi interface. For some reason, it may be a GUI interaction with the free inetmenu application folks download from OpenSolaris.ORG. Not sure. It used to work and plumb the routes on WiFi connections, but it may not with SXDE. If it isn't working, I've usually helped folks get up and running for a session by a) checking the routing using netstat -rn and seeing if there is a default route, and b) checking whether the /etc/nsswitch.conf file has hosts: and ipnodes: using the DNS nameserver. There should be a "dns" following "files" in the nsswitch.conf file. If not, you should be able to append it to those lines and get things working. Please file a bug at the SXDE community site.
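For reference, the two lines to check in /etc/nsswitch.conf should read with "dns" following "files":

```
hosts:      files dns
ipnodes:    files dns
```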

VIA c3 Panic on Shmem pagesize and re-init()

Okay, for those of you who love the VIA c3 and enjoy running servers at 13 watts total power with Solaris: better stick to S10 update 4 (8/07) or Nevada b65 and prior. We put something in around the b68 or b69 timeframe that, I think, does some kind of dynamic shared-memory page sizing and queries the system to set it. If I recall the bug report correctly, the VIA c3 doesn't support this, so it panics. This was supposedly fixed and put back into build 72 of Nevada, but it seems to only work for newer Nehemiah-based systems. If I run the latest SXDE on Samuel or Ezra cores, the panic is now fixed, but init() crashes on fatal signal 9 and restarts a gazillion times on my Epia 800 system. I've added comments to the internal bug about this, where they claim b72 had the fix; I tried it and it was still no go. Hopefully, it won't be long before a real fix is available. But you SXDE users with older Epia systems on a VIA c3/non-Nehemiah, please refrain from upgrading.

Older Intel 815 graphics - Newer ICH9 945 Graphics

I've tested SXDE on an older Compaq box with a 1GHz P3 and embedded 815 graphics. I used to have to stick an nVidia Riva TNT graphics board in there. (I got a half dozen of these older 16 and 32 MB AGP 2X cards, some low-profile for some AMD Geode bookPC systems I have, for just $5 each.) But they do add to the power profile of the box. So I tried removing the card to see if the onboard graphics work. Amazingly, it now appears to work with Intel 815 graphics, and I can save another 6-10 watts steady state. I haven't tested the 815 graphics support on the older Intel OEM D815EEA and D815EEA2 boards, which I have a small stash of from other boxes, although I did try around the b69 timeframe and it was still broken.

I have some older Intel Bearlake test systems for the ICH9 chipsets that have onboard 945 graphics as well, and SXDE works flawlessly. I have some older Dell 2001FP LCD monitors that can't handle true 60Hz refresh at 1600x1200 24-bit. I've either had to lower the bit depth to 16-bit or lower the resolution to non-native. The latest SXDE is pretty good and syncs up to 50Hz refresh at 24-bit native. This collaboration with Intel is going great as we're seeing more and more support for native Intel chipsets on our platforms.

Network Settings/IPfilters/IPSEC Upgrade from S10 and early Nevada

If you run SXDE1 (build 55b) or Solaris 10 update 4 (August/07) and you upgrade to SXDE 9/07, there is a pretty high chance, if you were running IPsec security, like Punchin, or you were using IPFilters, that the upgrade will hose your settings. Two things have changed. The IKE (Internet Key Exchange) service as set by SMF requires a few configuration changes. If you were using IPsec Punchin, which we use internally at Sun for telecommuting access into Sun's internal networks, like I'm doing now, then this requires a bunch of changes. Most of the installation handles it for you, but the Punchin packages we use must be upgraded to v2.x and the certs need upgrading to the latest. I had to pkgrm the old SUNWpunchin and affiliated certificate pkgs and reinstall the new ones. Before uninstalling, it's useful to run /usr/local/bin/client_backup to back up the local certs. Luckily, the new service supports the older legacy backup, so a client_restore against the older certificate backup will re-install and sync the keys up correctly. I usually test Punchin again, and if everything is kosher, I run client_backup again and save the new format of the cert-wad-of-stuff.
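Condensed, the upgrade dance looks roughly like this. It's a sketch, not gospel: the pkgadd source path is a placeholder, the affiliated cert packages aren't shown, and your Punchin media layout may differ. RUN=echo makes it a dry run that just prints the commands; clear RUN (and be root) to do it for real.

```shell
# Rough sketch of the Punchin 1.x -> 2.x upgrade steps described above.
# RUN=echo prints each command instead of running it; set RUN= (as root)
# to apply. The pkgadd path is a placeholder; cert pkgs omitted for brevity.
RUN=${RUN:-echo}

$RUN /usr/local/bin/client_backup              # save the old certs first
$RUN pkgrm SUNWpunchin                         # remove the old package
$RUN pkgadd -d /path/to/punchin2 SUNWpunchin   # install the v2.x package
$RUN /usr/local/bin/client_restore             # re-import the legacy cert backup
# test the tunnel, and if everything is kosher:
$RUN /usr/local/bin/client_backup              # save the new cert format
```

The second client_backup matters: once the restore has synced the keys, you want a backup in the new format, not just the legacy one.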

One issue folks may have with upgrading from S10 u4 or earlier is that, for some reason, the /etc/ipf/pfil.ap file sometimes gets blown away. And without the presence of this file, the new SXDE gives all sorts of SMF start-up errors. They're basically harmless, but if you're like me and run your machines both for local access and for tunneled access, with IPFilters and TCP Wrappers on all systems, even your laptop, the SMF warnings are disconcerting, and it's even more worrisome if your filters aren't active. It's like streaking around in public inviting any evil spirit to give you an STD. Anyway, the simple solution is to log in to some other Solaris box, copy over the /etc/ipf/pfil.ap file, reconfigure it for your interface, and reboot. The nasty messages go away, and hopefully your ipfilters will report they are up and enabled, and your log file (if you log incursion attempts) should show packets being denied access.
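In command form, the repair is about three steps. The hostname and NIC name below are examples; pfil.ap entries take the form "driver -1 0 pfil". RUN=echo keeps this a dry run; set RUN= (as root) to really do it.

```shell
# Sketch of the pfil.ap repair described above: grab the file from a healthy
# box, point it at your NIC, reboot. Hostname/interface are examples only.
# RUN=echo makes this a dry run; set RUN= (as root) to apply.
RUN=${RUN:-echo}

$RUN scp otherbox:/etc/ipf/pfil.ap /etc/ipf/pfil.ap
$RUN vi /etc/ipf/pfil.ap    # uncomment/add the line for your NIC, e.g. "bge -1 0 pfil"
$RUN reboot
```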

Oh, and one more cool utility in SXDE 9/07 is the latest Network Settings GUI in the administration tools for the Gnome Desktop. The only problem is that it doesn't properly set the /etc/nsswitch.conf file either. So no DNS, even though the GUI has a DNS server tab. The solution is to hand-edit the /etc/nsswitch.conf file again and put in "dns" for hosts: and ipnodes:; then it should work.

Post Install - Post Login Setup

If rev'ing up from an old version of Nevada or S10 and the home directory hasn't changed, the first time a user logs in, SXDE will actually spend a good minute perusing all the Gnome files and whatnot and try to reinstate your old Gnome config with the latest SXDE version of Gnome. Clearly, if you were using S10 Mozilla and now you've got stuff in Firefox, that won't carry over quite correctly. Thunderbird will try to import settings. But overall, the process can take upwards of 2 minutes while the screen sits there black with a small progress bar thingy that swings back and forth and doesn't actually tell you what the progress is. But I have yet to have the process fail. It can just take a hell of a long time - long enough to maybe go take a coffee or bathroom break, head into the garage to get a sledgehammer, come back, and think about taking that sledgehammer to the machine. But don't do it. It will complete, and hopefully you'll be a more patient person for it. The next login isn't too bad. Of course, if you're on a quad-core box with two sockets, this probably will never be a problem, since either it'll happen very quickly or you'll be running headless as a server anyway. But if you're like me and run older, slower, low-power boxes, then it can be a test of patience.

Brother HL-2040 Printer Installation

In SXDE 1, I had to do all sorts of tricks and install the CUPS freeware to finally get printing to work. This was really disappointing because I saw that killer sale for the Brother HL-2040 again for $59 after rebate and couldn't help myself. I had my buddy buy one also, for a grand total of 3 laser printers. 22 ppm monochrome with a 1500-page cartridge is, well, dirt cheap.

Not so with SXDE 9/07. I have finally retired the last Linux print server/internal backup server box in my house. I have one last box to retire, and then Linux will no longer run anything internally. That's because I took the USB cable out of my laser printer, plugged it in, and while logged into Gnome, I got a pop-up notifying me that the printer was up, available, and enabled. I opened the browser, went to a home page, and sent out a print job. Seconds later, the printer just worked. This is now working with the USB subsystems for VIA, Intel, SiS, and nVidia MCP chipsets on all my systems. The only issue is that I heard there's a parallel port bug that prevents the OS from seeing any ECP/EPP ports. That's being addressed, maybe in the b74 timeframe. But that's one old printer if you're still using parallel. Even my old Epson 880 has USB. But it just ran out of ink, so I haven't tested it on Solaris. I'm just stoked that my Brother laser printer is finally working, and it does so transparently. So props to the Solaris printing folks upstairs for all their hard work. I still need to test my battery of Epson printers, but that will come in due time.

I'm sure there are lots more things open to improvement. I'll continue to re-install the next versions and keep testing. I'm still not quite over the trauma of the 700+MB requirement for the Solaris graphical install. But with memory and motherboards so cheap, I'm having good retail therapy shopping for more hardware. Only, what do I do with the old stuff? Every time I look at my pile of old gear, it's hard to say goodbye to some good friends that have served me well and could still handle lots of tasks. I'd like to donate the stuff maybe, or perhaps try to work on a custom distro with a small footprint and an installer that uses less than 250MB and uses XFCE or something like that. Or maybe in the future, that might be a goal of the new OpenSolaris distribution.

Tuesday Feb 06, 2007

Getting Solaris x86 Graphics, NIC and Audio working on the ECS GeForce 6100SM-M

There was a recent sale at Fry's on a CPU + mobo combo: an Athlon 64 X2 3800+ 65W AA processor retail kit with an ECS GeForce6100SM-M motherboard - all for $139 before tax. The motherboard is based on the nVidia MCP61 (nForce 405) chipset, with socket AM2, PCI-Express x16 and x1 slots, DDR2 800 memory slots, the standard I/O for disks and floppy and USB, onboard nVidia GeForce 6100 graphics, built-in nVidia 10/100 Fast Ethernet, and High-Definition Audio with an ALC660 codec.

Getting Solaris to run well on this system wasn't the easiest thing. While the GeForce 6100 is a supported graphics chip, the version on this board wasn't recognized by the bundled 'nv' driver, which we develop in close collaboration with nVidia, with the most recent drop dated January 2007, just weeks ago. The on-board NIC was a strange device, with what looked like a 10/100/1000-capable PHY part but a chip (pci10de,3ef) only capable of 10/100 Fast Ethernet speeds. And the audio was another hybrid of sorts that used the MCP61 HD Audio controller coupled with a Realtek ALC660 codec. The controller seems to function to the Intel HD Audio spec and is therefore similar to all the other nVidia Azalia (codename for HD Audio) controllers. But the codec was a cheaper version with 5.1 audio, as opposed to the standard 7.1 surround audio of most HD Audio codecs (such as the ALC880, 882, 885, etc.). And as expected, the back audio I/O ports included only a single column of 3 jacks for Line In, Mic, and Line Out.

Using the VESA graphics driver with 16 bits

The GeForce 6100 graphics wasn't suffering the usual errors and exit issues that I'd expect from an unsupported card. Instead, X was starting, and the system thought it was running fine, but my 20-inch flat panel was complaining that the signal was at a mode unsupported by the monitor and would blank at that point. To get the graphics working, I examined the /var/log/Xorg.0.log file and found that the actual startup of graphics fell through the 'nv' driver, due to errors caused by missing modules for GLX, and instead loaded the VESA module. Even though the BIOS was set to share 64MB of memory with graphics, I couldn't get the VESA driver to display the native 1600x1200 resolution of my 20-inch flat panel. The command-line options are in the Xorg.0.log file, but I had assumed everything was kosher.

I managed to keep the system in command-line mode and manually fired up Xorg by typing /usr/X11/bin/Xorg, and suddenly, X came up with the standard grey houndstooth-style screen. But if I executed /usr/X11/bin/X (a script that sets up the Xserver and then calls Xorg), it failed. I put a "set -x" line into the /usr/X11/bin/X script, observed how it started, and discovered quite a bit of initial checking to determine the default bit depth of the screen. Without any environment settings, the system defaults to 24 bits per pixel and formats a command line with a number of options that includes the bit depth. This is then passed to Xorg. So the difference between calling Xorg with no args and calling X was the set of options. Using the process of elimination, I determined that without -depth 24, the X script can also start the Xorg server, and does so using a default of 16 bpp at 1600x1200 resolution. I could actually get the VESA driver to do 24 bpp at 1280x1024, but that was far more blurry and not the native resolution. So I decided to correct the /usr/X11/bin/X script and add a line, prior to the initialization of the Xserver, setting the default depth to 16 bpp. This allowed the GUI to come up at native resolution with 32K colors. This is good enough for most uses, but application popup menus are colored incorrectly when overlapping another window at 16 bpp. It's tolerable and still works, but I'd prefer 24 bpp at 1600x1200 resolution.
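If you hit the same blank-screen symptom, this is roughly the triage described above, in command form. A sketch only: your log contents will differ, and RUN=echo keeps the server start as a dry run (set RUN= to really start X).

```shell
# Rough triage for the blank-screen VESA-fallback case described above.
# RUN=echo makes the server start a dry run; set RUN= to really start X.
RUN=${RUN:-echo}

# See whether Xorg fell back to the VESA driver and what depth it picked:
grep -iE 'vesa|depth' /var/log/Xorg.0.log 2>/dev/null | head || true

# Start the server by hand at 16 bpp, the depth that let the VESA driver
# drive 1600x1200 on this board:
$RUN /usr/X11/bin/Xorg :0 -depth 16
```

If 16 bpp comes up clean at native resolution, the X script's default of -depth 24 is the culprit, which is what led to patching /usr/X11/bin/X.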

HD Audio modifications

We have some minimalist HD audio support currently shipping with Solaris 10 update 3 and OpenSolaris. The driver actually supports a growing list of codecs from Realtek, Analog Devices, and Sigmatel. The architecture of HD Audio is different from the old AC'97. Both specs are from Intel, but the newer HD Audio spec isn't designed as a superset of AC'97 features. It is a new device architecture that separates controller from codec. So the MCP61 controller appeared to be much like all the previous nVidia Azalia controllers. In fact, by adding the line audiohd "pci10de,3f0" to the /etc/driver_aliases file, then running "update_drv audiohd" and "devfsadm" and rebooting, the audiohd module in Solaris does load and almost attaches. It errors out, however, because inside the driver module there is a codec initialization routine that fails when the codec isn't recognized.
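For the record, update_drv can add the alias for you instead of hand-editing /etc/driver_aliases (note the nested quoting around the ID). A sketch; RUN=echo keeps it a dry run, and you'd run it for real as root.

```shell
# Alternative to hand-editing /etc/driver_aliases: let update_drv add the
# alias binding the MCP61 HD Audio ID to audiohd. RUN=echo makes this a
# dry run; set RUN= (as root) to apply, then reboot as described above.
RUN=${RUN:-echo}

$RUN update_drv -a -i '"pci10de,3f0"' audiohd   # bind the MCP61 ID to audiohd
$RUN devfsadm -i audiohd                        # rebuild /dev links for it
$RUN reboot
```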

The funny thing about this ALC660 codec is that it has 5.1 channels, as opposed to 7.1 for most HD Audio codecs. It's a shortcut that removes some of the audio output pins. I figured it was probably very similar to the ALC880 and ALC882 codecs and that I could probably tickle the same pins using the same code - only there'd be one fewer pair of pins. While the risk of frying the part exists if you configure it incorrectly in a driver, the chance of that really happening was slim, and something I guessed was worth it if I could get the system to play even a noise.

Within the codec initialization in the Solaris audiohd.c code are a number of switch/case blocks that do the work. I started by modifying the audiohd_impl.h header file and added a new AUDIOHD_VID_ALC660 entry, which corresponds to a pci10ec,0660 device ID. This is actually attached to the PCI-Express bus. Next, I opened the audiohd.c file and added entries in a few dozen places, wherever I saw ALC880/882. After rebuilding the 32- and 64-bit drivers and adding them to the /kernel/drv and /kernel/drv/amd64 directories, I rebooted and got the audio to play.
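Dropping the rebuilt modules in place is the usual copy-and-reload dance. A sketch, assuming your build puts the 32- and 64-bit binaries in obj32/ and obj64/ (adjust for your workspace); RUN=echo keeps it a dry run.

```shell
# Sketch of installing the rebuilt audiohd modules. The obj32/obj64 paths
# are assumptions about the build workspace; adjust to where your binaries
# land. RUN=echo is a dry run; set RUN= (as root) to do it for real.
RUN=${RUN:-echo}

$RUN cp obj32/audiohd /kernel/drv/audiohd         # 32-bit module
$RUN cp obj64/audiohd /kernel/drv/amd64/audiohd   # 64-bit module
$RUN update_drv audiohd                           # reload the driver config
$RUN reboot
```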

Open source nVidia Ethernet driver

The later Nevada builds and Solaris 10 update 3 all support the nVidia onboard GigE device (nge), but the MCP61 networking chip has an unrecognized device ID (pci10de,3ef). I tried adding this one to /etc/driver_aliases too, ran my update commands, and rebooted, but while the module loaded, it did not attach to the device and I couldn't get it to plumb. Searching the web, I encountered Murayama's free Solaris NIC drivers and an alpha version of the nfo-2.4.1 driver. It supports a number of nVidia on-board fast ethernet chips, but none with the same ID. I tried to attach it anyway, and the message logs told me that the driver attempted to attach but failed to find the device in the nfo NIC table. I looked at the nfo_gem.c file and found the array declaration for nfo_nictbl[], which listed the 15 devices supported so far. I cloned the last entry and added a 16th with the new device ID, marked as supporting 64-bit and jumbo frames. I wasn't sure that was actually the case, but I recompiled, copied the binaries back into their respective /kernel/drv and /kernel/drv/amd64 directories, and magically the interface came up when I rebooted and manually ran "ifconfig nfo0 plumb".
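The install-and-plumb sequence, condensed into commands. A sketch: the DHCP step is an assumption (use a static ifconfig if that's your setup), and RUN=echo keeps it a dry run until you clear RUN and run it as root.

```shell
# Sketch of installing the patched nfo driver and bringing the NIC up,
# per the steps described above. RUN=echo is a dry run; set RUN= (as root)
# to apply. The dhcp step is an assumption -- substitute a static ifconfig.
RUN=${RUN:-echo}

$RUN cp nfo /kernel/drv/nfo
$RUN cp amd64/nfo /kernel/drv/amd64/nfo
$RUN add_drv -i '"pci10de,3ef"' nfo   # first install; update_drv -a -i if already added
$RUN ifconfig nfo0 plumb
$RUN ifconfig nfo0 dhcp start
```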

The next step was to access the network and bringover about 15 GB of files and sample audio which came over quickly and without any issues. This was to do further testing on the audio driver as well, which continued to play just fine.

All in all, not the most straightforward of installs, and not hands-free, but it was relatively painless, largely thanks to the community and open source.

Wednesday Nov 29, 2006

Solaris Install Experts - the New Chic

Quite a few years ago, I met up with a big guy at Tokyo University. His name was Ohno-san. He had a build similar to mine: big, round, husky. He rode a big motorcycle that really ate gas - sort of a status symbol befitting a big honcho and one of the early contributors to Japan's WIDE network. I remember his greeting to me - the once-over stare, then a grin and a modest handshake. Not a limp fish, and somewhat out of character for a Japanese person. He said immediately that he could tell I was a techie/geek and must know my UNIX systems, since most of us are pretty big guys with similar builds who look like we exercised our forearms lifting slices of cold pizza for most of our lives. But I lacked the facial hair and bad hair. Oh well for stereotyping folks like us. But I don't mind the status and any implied chic other folks -think- I may possess.

There seems to be a revival of that UNIX chic these days. It's not just inside the company. I see it with some vendors/partners and some of the academic/EDU folks. Solaris is cool again, and folks who can install it and fix other folks' computers have a certain chic. Granted, we're a couple levels below the god-hood of a kernel developer, but we interface at a higher level with desperate folks in management and marketing who want to try out Solaris but haven't got a clue how to install it properly on their systems.

Pepboys (Geek version): Computers Like Us - Colleagues Love Us

Solaris installation is a lot like car maintenance. Almost anyone could probably pick it up if they had the inclination to research a little and exercise some Emersonian self-reliance. The argument against everybody doing this (within a company, or outside even) is the notion of comparative advantage. This idea says that everyone has their personal strengths and contributes in their own way. Folks have proven mathematically that comparative advantage allows multiple parties to optimize their productivity so all sides benefit. Hence, we all specialize in our particular fields. And that's just my long-winded explanation of why there's a chic associated with being able to install Solaris: simply because folks like us are in demand. And so the ones who are less self-reliant will want to schmooze with us to defrag their laptops and partition a slice to install Solaris on.

But it's still a lot like car maintenance. And a cornerstone of the mechanic's trade has been that customers go back to the mechanics they trust, and with an evolving relationship, customers grow to respect the journeymen with lots of experience. Solaris installation ain't much different from doing tune-ups. The more systems we get to work on, the more tips and tricks we learn. One of the things we pick up is what to buy, what works well, and what has good price-performance. Yes, it applies to names like Toyota and Honda too, but we're talking systems, and it isn't always the high-end chipsets that are well supported by Solaris.

In-Flight Across the Chasm

And whether that compatibility is a result of more community users hammering on the platform, or of the platform being more compatible and therefore attracting more users, it's clear that Solaris's recent popularity is coming from the x86 side. Sure, it runs on SPARC, and we try to ensure that, out of the box, Solaris just runs well and comes tuned on SPARC. But our SPARC customers simply expect that and depend on it. The Solaris x86 side has been more mercurial. Instead of specifying specific supported hardware (which we sell), we've had to provide an OS with broad support for many 3rd-party devices. Linux has done a great job crossing this chasm of device driver support. Many vendors are providing drivers up front now for Linux. But the Linux kernel and headers are GNU GPL'd, and the license can be somewhat severe for enterprises with trade secrets. Some vendors have tried to play a risky game using shim layers in their device drivers to insulate themselves from the GPL. If they do it right, they end up with massive build environments for Linux, because to stay compatible with the ABI, they need to maintain copies of kernel source, headers, and compilers. If they do it wrong, like a couple of embedded switch companies in Europe recently, then they may be forced to open source all the proprietary software on the device that leverages GPL code, or face a massive recall of all the network appliances sold in the last several years.

The problem isn't so bad with GPL applications living outside the kernel. Applications are fairly safe if they only link to high-level libraries. But device drivers are kernel modules; they live in the same process space as the kernel and rely on GPL headers to compile. To get around the GPL, shim layers of GPL code that link to standalone proprietary binary objects seem to be the standard these days. But compiling a kernel module so that it loads without complaints (e.g., the kernel taint message) doesn't work well generically. The solution, if done right, is to compile a target binary driver module for that explicit kernel, with that explicit distro of Linux, with that specific version of the compiler, just as an added safe measure. But this grows into a support nightmare for vendors pretty quickly. I support at least a couple of partners, each supporting several popular distros, who keep about 29 GB and 40 GB respectively of build environment (yes, gig as in billion). All this to prevent the taint message from showing up during modload. So it's not surprising that, from where my group sits, more and more vendors are actually contacting us about porting drivers to Solaris x86. Their Solaris driver build environments are just a couple to several megs - mostly documentation and make stuff, not actual code.

And so, Solaris is trying to cross that chasm today. Fortunately, I can see the other side, and as Joerg Schilling predicted about a year and a half ago, if we kept working on x86 at this rate, we'd cross that chasm pretty soon. Thanks, Joerg, for believing in us. We're not there yet, so we're still working really hard. But it won't be long now. Along the way, some of us have picked up some useful ways to get Solaris up and running on our systems.

Installation Tips and Tricks - a Summary

It's not all about the drivers. There are some other prerequisites to installing Solaris that should be covered. I'll be blogging more about the tips and tricks of installing Solaris in later posts. But it starts with hardware choice and what not to spend money on. I'm into frugality, and it's often the cheapest, all-in-one motherboards and hardware that have support these days. It's more about selecting the right chipsets on a motherboard. I'm not a big-time gamer, so I don't go after the super high-end market. Plus, if I'm running Solaris, most likely I'm doing simple home stuff - audio, some digital camera stuff, some word processing, running some web sites and mail servers, firewalling the rest of my house network, etc. I make it a point not to spend more than $50 on a motherboard. If it's on sale, even better. And if it's a combo with all-in-one graphics, audio, and LAN for under $75, better still.

It seems like the optical media install is the de facto standard by which we judge usability. But more and more, I use network installs. It's amazing how many motherboards support PXE boot these days, and in a previous BigAdmin article, a colleague and I put together a quick cheat sheet on how to set up a network install server and add more drivers to the netinstall image. But did folks know that they can pre-flash disk drives with Solaris and then re-configure them? I have a couple of servers at home and in the office that have neither an optical drive nor PXE boot BIOS extensions. I installed Solaris on a disk stuck in an install machine that flashed a netboot image onto it, and then I stuck the disk into the server box with no optical drive. There are issues, of course, with the boot archive, the old device tree, etc. Linux does really well in this area with Kudzu, and admittedly, I wish Solaris were better. But installation isn't something most folks do that often. So even without a single utility doing it for us, if someone just had a complete set of instructions on how the darn boot archive, path_to_inst, and device tree work together, we might be able to reconfigure the drive for the new hardware in, say, less than a couple of minutes, and it might not be so bad. Better yet would be to script the process and have it as a command in the failsafe boot image. That's not there now, but it's something some of us are suggesting go in there in the future.
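For the curious, the moving parts when you transplant a pre-flashed disk are roughly these. A sketch only, under assumptions: you're in a rescue/miniroot environment with the transplanted root mounted at /a, and the path_to_inst surgery (the messy part nobody has scripted nicely) is left as a comment. RUN=echo keeps it a dry run.

```shell
# Rough sketch of re-syncing a transplanted Solaris boot disk with its new
# hardware, per the discussion above. Assumes the disk's root is mounted at
# /a from a rescue environment. RUN=echo is a dry run; set RUN= to apply.
RUN=${RUN:-echo}

$RUN touch /a/reconfigure              # force a device-tree rebuild on next boot
$RUN bootadm update-archive -R /a      # regenerate the boot archive for the new box
# /a/etc/path_to_inst may still carry stale instance mappings; a reconfigure
# boot lets the system rebuild them against the new device tree.
```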

Lastly, once the system is installed, there's all the standard software that folks should stick on their system that isn't there by default. There's the standard pathing for user shells that we should set up so commands are easily found, too. There are also some nagging problems with devices that don't work well or at all. They don't impact the core Solaris kernel, but they may make the system unusable (e.g., graphics incorrectly sized, sound that doesn't play, or some devices not functioning). Sometimes an existing driver might actually work; only the vendor and device ID were not recognized in the OS database. Other times, the driver may be available as a free or commercial 3rd-party offering, just not on the install media. But there are quick ways to find basic drivers for network, audio, WiFi, and other components. These and other subjects will be topics in the next couple of blogs as I have time.

Looking back on the past couple of years and my experience with Solaris x86, I've gone through about 4 or 5 cycles where I've attempted to install the latest current OS onto all my home and office boxes. At first, it was with mixed success, meaning a good fraction of the drivers were missing or so poor in performance and reliability that the OS was unusable on the system. But lately, that's changed. Many low-end, budget systems do work, in fact. Moreover, lately the installations have been relatively easy. So it may not actually be so hard to achieve that high level of fashion and popularity that comes with being a Solaris install wiz, at least for a short while; most folks will still think installing Solaris is hard. But it's not something you need to share with everyone. And maybe you don't need to sit around all day eating cold pizza slices to become a master at it. So you can have the sysadmin chic and still maintain the "girlish figure." The key is to enjoy this upcoming new year with Solaris, and sandbag a little when friends and colleagues beg and grovel to have you install their laptop systems. Tell them you have a backlog and need more time. That might be true sometimes, but most of the time, I'd just take the fishing pole out and go fish a few hours while the install completes in about 30 minutes. I brought a new custom fishing rod into the office recently. It's up on top of my locking bookshelf unit. Colleagues think it's just ornamental, to go along with my fishing pic on the door. Hah!

Monday Nov 15, 2004

JDDAC at the Romberg Tiburon Center - SF Bay Estuary Monitoring

Well, how often does high-tech work in large-scale enterprise computing fall right into a fisherman's habit? I had a nice conference call with a bunch of university and corporate technologist folks yesterday afternoon about a project that needs a world-class software architecture to provide real-time or near-real-time data about SF Bay water quality and monitoring. The idea is to get data every 1 - 10 minutes from a wireless sensor grid that monitors all the estuary waters, and even the middle and outside of the Bay, and provide that data accurately and quickly to all users. Such data would include real-time lookups of conditions, as well as historical data for scientists, researchers, and policy makers.

Sun and other companies started an initiative some time ago called Java Distributed Data Acquisition and Control (JDDAC). One can immediately see a strong business case for JDDAC. The challenge today in manufacturing, for example, is retooling costs for high-margin custom manufacturing. In other words, businesses can charge more for custom work. But the costs of retooling are prohibitive, so customers tend not to order custom jobs unless really necessary, because they cost so much more, and manufacturers rarely provide discounts on custom jobs unless one places a very large order. The purpose of JDDAC was to standardize remote sensing and control technology so that businesses could retool and customize their manufacturing lines to do more custom work at lower cost, thus meeting customer demand and lowering business cost.

But then recently, some SF State U. researchers at the Romberg Tiburon Center who do marine estuary research got wind of this initiative and quickly connected the dots. They've been funded by various gov't agencies to monitor Bay water quality. One of their major data consumers is NOAA. And some of their data is used in policy making for all sorts of things, from water diversion upstream to quality-standards evaluation of habitat for fish and wildlife. My details are only superficial, and I'm learning more about this as I go along. But it sounds like a lot of fun.

I think I heard that they got a grant to design and build a new sensor grid architecture to monitor water quality. One of the design goals will be to standardize the software and hardware interfaces so a grid could be quickly and efficiently deployed anywhere (even exported to other sites around the world) to monitor many types of conditions - e.g., water clarity/turbidity, bio-fouling, and mineral and other elemental concentrations such as CO2, O2, pollutants, salinity, etc. The system would need to scale as well. Starting with just a dozen sensor stations with sub-grids of sensors, the system needs to support thousands or more, and do this in real time. With such a plethora of sensors, having a standard interface and software to do data acquisition and control is vital if they are going to succeed.

Today, they have hardwired sensors mounted on big concrete pilings along some shorelines. These are physically connected to computers, and every 1 - 5 minutes, water quality data are gathered and stored on disk to be processed later. It is non-real-time and requires considerable human attention. The new system they want to build would be automated and wireless, and would use standard sensor interfaces like JDDAC to collect data and control sensors. For example, one of the problems with taking, say, turbidity measurements (murkiness of the water) is that algae and crustaceans can foul the intake sensors. Their solution has been to put yet another remote-controlled, hardwired blower/pump around the sensor grid to clear the fouling prior to each measurement. More wires, more maintenance, more downtime. Ideally, with a wireless, standard control interface, one controller can proxy commands for another (much like USB or FireWire devices, which you can chain together). The advantages of this technology for monitoring water quality are obvious, and when I brought up the concept of a fish census, these professors all understood the challenge of figuring out how many fish are actually in the water - data DFG/FGC need to assess stocks.

As an avid angler of Bay waters concerned about Bay water quality, I was delighted to see such initiatives for real data collection be funded and proceed at such a rapid pace. I was also delighted when a senior colleague of mine who leads the specification team invited me to become one of the participants for the software and network architecture. When the SFSU professors spoke about pier pilings, tidal currents, and bio-fouling, I could picture exactly what they were dealing with, having spent some years now fishing these areas. And my colleague grinned, because he knew that for this particular project, I was perhaps as eager to support the initiative as I was qualified, from both the software and marine aspects.

And interestingly, word has come down the Fish & Game pipeline that some folks down south in Monterey may be interested in a sensor station. Funding has come through, and I heard they are building a new pier near the Moss Landing jetty. This pier may be used primarily for research vessel mooring and Monterey Bay science projects, is what I heard, but that's early news and more details need to be researched. I need to contact some folks at CSU Monterey Bay to see if this is one of their projects and to ask if they might want to collaborate. I attended a great little symposium at CSUMB back in July or August and saw a lot of research posters from collaborating universities all over the West Coast and the US. The symposium was sponsored by NOAA (the Nat'l Oceanic and Atmospheric Administration).

Who knew just fishing the Bay from pier and shore and along our estuary waters would have such beneficial and synergistic consequences? More to come...

Friday Nov 05, 2004

BBBQH2 for Techies

Big Barbeque HowTo for Techies

Too busy to blog!

I haven't blogged for quite a few weeks now. Just too busy to even get my head out of work, mostly. I was one of the lead organizers for an annual internal technical training and conference and headed up IT, registration, audio/visual, and food for the week-long event. Our group of about 200 people worldwide has undergone some major changes in the last few months. A complete re-organization. All new VP, Sr. Director, and Director that I report to now. I'm in a new position too, doing similar techie stuff, but under a slightly different charter and for slightly different vertical market segments. Morale was pretty mixed at the beginning. But people seemed to cheer up after the conference, where they all had a chance to enjoy a few cold drinks and some good food. I must say the week-long event couldn't have been a more perfect venue. We had speakers like Andy Bechtolsheim, Scott, and Jonathan do keynotes for us. And they didn't disappoint. I felt it a personal duty to serve food that was at least worthy of such a great program.

Hosting a week-long event can be tough. We had great help from our organizing committee. Content was led by Matthias, a German colleague out of Sun's Walldorf office. And logistics for budget and travel were handled by logistics expert, Don. He pretty much shaved travel, lodging, and ground transportation costs down to around $1,000 per person for roughly half the participants, who came from 13 other countries and 5 major areas around the US. As we all know, budgets can be tight, and directors are given very limited dollars for group team events these days. Times were more encouraging this quarter because those funds were more available, but the overarching directive was still to watch our costs. Still, I actually had the opportunity to lead 2 major events - a BBQ and a luncheon - and act as primary cook for a third (Fajitas & 'Ritas). The Fajitas and Margaritas party and the luncheon were employee funded with minor support from the company, while the BBQ in the park was picked up completely by the company. The costs? Well, the Wednesday BBQ for 160 people cost just under $5/person. The Thursday Fajitas and 'Ritas (Margaritas) party for 69 people cost about $5 - $10 each depending on whether a participant imbibed, and the Friday luncheon for 180 was just $3/person.

Catering to my Colleagues

First of all, I love to cook for people and I love to grill. It goes back to my academic days. I was pretty active in student societies back at Cal Berkeley. In my senior and first grad school years, I was BBQ'ing almost 3 times a week for various student groups. These were mostly morale/team building events that were revenue neutral. But the joke in the College of Engineering was that I was the master of porous media heat transfer - the porous media being charcoal briquettes, real wood, or fake crushed volcanic stuff over a propane-fueled flame. There was just something about grilling outdoors that put me in touch with my primal roots. Beat the tom-tom drum. Sing Kumbaya kinda stuff. Bond with nature, yadi yadi yadi.

Wednesday Afternoon BBQ

Getting back to the Wednesday BBQ. There are lots of ways to cater a large outdoor BBQ party, but quality and quantity can vary drastically depending on the help, menu, and venue. And I can't overlook good help. Some of our group admins have tremendous logistical experience with events, and they showed up and helped out. One of our stars, Kim, is 8 months pregnant (almost 9 now), and she showed up early with 80 lbs of ice! We also had 3 engineers show up early to post signs along the roads and park to guide folks to our destination, plus set up the chow lines and place all the food for optimum parallel processing of hungry eaters as they came through. The Directors showed up and ran all the slice-n-dice food prep operations for hors d'oeuvres and other items. It was real team work.

But for the best help, my recommendation would be tough for anyone else to follow, because I would suggest that you marry a spouse as helpful as my wife. She really pitched in with the shopping and the food prep. We were both up late the night before, first shopping and then prepping and marinating, and then up again at 4:30 the morning of the BBQ to prep some more, cook some of the dishes, load up the vehicles, etc.

BBQ Risk Management

The way I organize a BBQ is all about risk management. There's environmental risk, like the weather not cooperating. And there is operational risk. Certain operations can be delegated, like setup and cleanup. But some operations, like procurement, must be controlled by a single party, or we risk the standard asynchronous potluck syndrome - too much beer, chips, cake, and spoons, but no plates, forks, diet beverages, or main dishes. Then there's health risk. One challenge here is sanitation. Another is storage. How many folks can handle literally 1000 lbs of liquid and then deliver it the day of the event? What about a hundred pounds of raw meat? How many folks have food-service experience? While the risks of e. coli and salmonella contamination are fairly low, at a company event, 200 high-IQ engineers could be made pretty unproductive for quite some time if an outbreak should occur. Having worked in a restaurant for 4.5 years and done lots of procurement, that's a job I usually take personal responsibility for. That leaves menu and venue as action items for others to decide.

A Possible Menu

Planning an event is almost like doing Product Life Cycle (PLC) Management, only the time frame is a lot shorter. You have an approval committee with managers and architects, multiple people are tasked with action items (in our case - a BBQ event), and we draft half-pagers and one-pagers outlining our proposed architecture or menu. Cost/benefit ratios are analyzed, small separate committees and product teams (cTeams and pTeams) form around separate tasks, and we set milestones. As milestones come and go, we track progress, and finally, come GA, people pull all-nighters to make sure stuff ships on time. In our case, it all started with my proposed menu. Since I took responsibility for procurement, the group thought it only appropriate to give me the privilege of doing the menu (not to mention it would save a lot of hassle all around to just have me do it).

For openers, I was planning on smoked salmon on French baguette slices with olive oil, pickled capers, and dill weed. A classic hors d'oeuvre, but somewhat labour intensive. Others suggested that we simplify and provide individual bags of variety chips - a la Frito-Lay. We thought about this and decided to do both. The Directors were given the assignment to fabricate the smoked salmon hors d'oeuvres, and they did an okay job, although they skimped on the smoked salmon, using only one pack of salmon and leaving about 2 baguettes. The next step was to provide some first course (primi piatti) options. Usually, in the States, this means salad. We decided to offer a tossed mixed salad (insalata mista) with Ranch dressing and tiny olive-shaped tomatoes, plus optional potato salad. Both are extremely affordable and come mostly pre-packaged off-the-shelf. Next was to provide a carbohydrate option prior to or with the main (second) course. We considered the sensitivity to all the vegetarians and came up with several optional items: Garlic Bread Forte (strong garlic bread - Bamm!! with like lots o' garlic), sweet dinner rolls (for folks with the vampire retro virus - intolerant of garlic), steamed basmati rice seasoned with turmeric, saffron, cardamom, and raisins, and lastly, spaghetti with a simple marinara sauce. The main course would be an assortment of grilled boneless beef rib steak, boneless dry-rub seared chicken thigh meat, Italian sausage, Cajun-style hot links, and portobello mushroom burgers. For dessert, we planned on bulk bags of seasonal fruit like apples and oranges. And finally, for drinks, we went with a full non-alcoholic beverage list of individually bottled or canned sodas, diet sodas, sparkling water, and plain bottled water sans gas.

Note on Vegetarians/Vegans

Vegetarians are categorized as folks who do not eat meat, but may eat eggs, cheese, and other dairy products. Vegans do not eat any animal products whatsoever. In any high-tech company doing software these days, you can expect about 30% of the people to eat vegetarian-only and a small 1% or so to be vegan. Many of these are the Indian engineers, and we need to consider their needs as well. This means purchasing 100% durum wheat pasta (no eggs) or making a number of traditional rice and lentil dishes that contain no butter, milk, or eggs. And in cases where stock is required in a sauce reduction, we need to use vegetable stock only. Since such dishes can be appealing to both meat eaters and vegetarians, it's important to make enough for everyone. That gets a bit tricky sometimes because some dishes may be labour intensive or costly and thus limited. For example, 2 years ago, at a previous BBQ, I had a stack of portobello mushroom burger/steaks simmering in a marsala wine gravy, enough for the 30 or so vegetarians. But the mainstream folks loved them so much that the stack was depleted in less than 3 minutes, and quite a few vegetarians went without their main course. It still bothers me that I didn't prepare for that outcome and some folks went hungry. The response from the omnivores was a bit callous - "It's BBQ Darwinism - omnivores rule!"


Venue

This year, we chose to host the event at Sunnyvale Baylands Park - a municipal park that was both nearby and reservable for a fee. Another good thing was that this park has defined opening and closing hours and charges everyone for parking. Such parks are usually fairly reasonable venues, costing just $300 - $500 for site reservations, with discounted group parking on the order of $3/vehicle. By carpooling, the parking fees can be reduced further. We could certainly have found a cheaper venue that was only $50 to reserve and had free parking. But we chose the paid route this year for a couple of reasons. First, the fees help maintain the park in optimal condition. Free public parks tend to be more run down and lack clean facilities like bathrooms, electrical power, and potable water. Second, the defined hours and costs inhibit transients - especially the mentally unstable - from taking up residence in the park. Last year, when we hosted a BBQ in Milpitas, I ended up having to call the police when a young man in his early 20's on a BMX bicycle decided to crash the party. He began reaching down into his pants in front of a group of us, including our V.P. We've learned since then to be more selective about venues.

Heat Transfer

Make no mistake: in a BBQ, the key to cooking is mastering the fire. The type of flame one targets really depends on the type and size of the BBQ. Typically, public parks have two types of BBQs. For single-family and small picnics, they have a small metal box welded on top of a pedestal. It has about 1/2 sq. meter of grilling surface and slots on the sides that allow the grill height to be adjusted. For larger group BBQs, many parks have masonry pits. These are often about 2 sq. meter rectangular pits recessed inside a waist-high masonry structure. A heavy wire mesh screen on a chain and pulley allows the entire grill to be raised or lowered over the pit via a long crank handle or cog wheel.

I know some folks who bypass public BBQ pits and lug their own grills because the public units are either broken or filthy. While this is sometimes necessary, often a stiff wire grill brush plus a very hot fire are enough to rejuvenate even the most gnarly of public grills. Fortunately for us, the pits at Sunnyvale Baylands are in premium condition. They are the big rectangular recessed type, each with potable water on one side of the grill and a 110 VAC electrical outlet on the other. Each reservable area sports 5 such pits plus its own prep table, picnic tables, and garbage and recycling receptacles. Truly one of the finest public park facilities I've had the privilege to cook at.

For public pit BBQs, I use a 3-stage fire that takes about 35 - 45 minutes to get ready. It starts with a modest amount of charcoal briquettes on the bottom. Any brand will do. I only use this as a catalyst to pre-heat the pit. About 5 lbs shaped into a conical mound works for small pits. About 20 lbs (10 kg) is more appropriate for the large pits. Note that the mound should be shifted over to one side of the grill. Which side? Well, get a sense of the wind direction by observing for a few minutes. Then position yourself on the side of the pit where, statistically, the wind is mostly at your back. This is critical so the smoke doesn't overwhelm you while you cook. For right-handers, the main fire should be primarily on your right. Raw food starts on this side, and as it cooks, you flip it and move it to the left. Lefties should start fires on the left side. Don't worry if you don't cover the entire pit. You only need about 1 sq. meter of cooking surface per 75 people or so. You want some areas without direct heat underneath, for warming food but not cooking it.

Next, squeeze a generous amount of charcoal lighter fluid onto the mound of briquettes - about 25 cc/kg (about 0.4 fl. oz per pound of briquettes). Let it soak into the coals, then light the fire. Wait about ten minutes, and when the primary fuel has burned off, toss on about twice the charcoal's weight in mesquite chips. If you like, you can mix 50% mesquite and 50% hickory chips. You can buy BBQ chips from most BBQ supply places; Walmart has them near the garden section in 18 lb bags. The key is to get blocks of chips that are about 5 cm in their largest dimension but no smaller than 2.5 cm in their smallest dimension. Technically these are more like blocks than chips. Tossing the chips on the coals can snuff out or reduce the flames initially. Don't worry: if the charcoal has been ignited properly as per the instructions above, it will reignite the chips in due time. If you're in a hurry and need to accelerate the process, get a piece of cardboard or a large, stiff paper plate and fan the coals for a few minutes to get oxygen into the center. This makes the coals red hot, and when you stop, you should see a light blue flame shoot up as the flame temperature approaches the 500 deg C mark and the chips ignite readily.
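The quantities above boil down to two simple ratios, which can be bundled into a quick back-of-envelope helper. This is just a sketch of my rules of thumb; the function name and structure are illustrative:

```python
def fuel_plan(charcoal_kg):
    """Rough fuel quantities for a pit fire, per the rules of thumb above:
    ~25 cc of lighter fluid per kg of briquettes, and about twice the
    charcoal's weight in wood chips."""
    lighter_fluid_cc = 25 * charcoal_kg
    chips_kg = 2 * charcoal_kg
    return lighter_fluid_cc, chips_kg

# A large pit takes about 10 kg (20 lbs) of charcoal:
print(fuel_plan(10))  # -> (250, 20): 250 cc of fluid, 20 kg of chips
```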

Note that on rainy days, when the humidity is in the 90 - 100% range, BBQs are very hard to light. Increase the fuel amount, and protect the coals and wood from getting damp. This is critical, or else it may take much longer to fire up the coals. If rain is coming down, well, hopefully you had a contingency plan to cater the food indoors and bake or broil the items in an oven. (But it doesn't rain often in October in California. We had 26 deg C weather most of that day. Just fantastic.)

Once the wood chips are burning well, which should be about 10 minutes later, take a fireplace poker or long sacrificial stick and spread out the coals and chips more evenly. I then place oak or almond hardwood logs on top of the coals and let these catch fire - again, about 2 or 3 times the amount of the original charcoal by volume. I let this burn about 10 minutes more, until more than half the surface of the logs is white, charred, and glowing. It's now time to lower the grill above the fire. You should feel a tremendous amount of heat rising up as you get close; since the walls of the pit effectively insulate against heat loss from the sides, all the heat is directed up. The heat may be unbearable. Usually, if I reach over quickly, just above the pit, the tips of the hairs on the back of my hands and forearms will singe. I highly recommend at this point that you have a small 2 - 5 gallon bucket next to you, filled with cold fresh water, for keeping your hands and arms moist. The evaporation of water will protect the skin for up to 15 seconds as you reach over to manipulate food. But once your skin goes dry, you can easily receive 2nd degree burns holding your hand and arm over the fire.

Note: For our event, with 160 guests on the RSVP list, we expected about 15% more party crashers (because of the free food and proximity to our Santa Clara campus). To accommodate such a large crowd, we decided on two pits to double the cooking surface. This also made sense since we could then cook the vegetarian items on one grill with one set of utensils, without fear of cross-contamination from meat and/or meat juices on the other grill.

Cooling down the drinks

Initially, such a fierce fire is not really suitable for cooking. Food would char instantly while the inside stayed raw. However, this heat is great for sterilizing and carbonizing any deposits on the grill. I usually let the heat do its job at this time and come back later with wet hands and a stiff wire brush to clean the grill before putting on my first items. Around this time, I prepare the drinks - a welcome task, because it gets me away from the pit and involves cold ice.

Lots of people think that with BBQs, the biggest task is the preparation and cooking of meat items. In fact, that's probably not true. Among 18 - 29 year olds, the average male will eat only about 3/4 pound of meat (weight before cooking) at any BBQ. Females eat just under 1/3 pound. For the 30 - 45 age group, reduce both numbers by about 33%. That means for a group with an average age of 32 years, 3/4 male and 1/4 female, we can expect 200 people to eat less than 100 lbs of meat items.
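Those figures can be turned into a quick planning estimate. The sketch below applies the 33% age reduction across the whole group, which is a simplification of my rough numbers; the function name and default values are just illustrative:

```python
def meat_estimate_lbs(n_people, male_frac=0.75, lbs_male=0.75,
                      lbs_female=1/3, age_factor=0.67):
    """Estimate raw meat needed (lbs), applying the ~33% reduction
    for an older (30 - 45) crowd uniformly across the whole group."""
    males = n_people * male_frac
    females = n_people - males
    return age_factor * (males * lbs_male + females * lbs_female)

# Our crowd: 200 people, ~3/4 male, average age 32 -> under 100 lbs
print(round(meat_estimate_lbs(200), 1))  # -> 86.5
```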

However, if the weather is warm - which it was 3 weeks ago - you can expect each male to consume close to 2.0 liters of liquid during a 5 hour period. Not all of it is actually swallowed; people tend to waste about 25% of their beverages because they don't finish one before getting a new one, or they confuse it with someone else's and then don't want to drink it. So on a hot day, we need to haul a lot of drinks. And the amount could be even more if most folks decide to participate in physical activities like soccer (football), frisbee golf, etc. For 200 persons, this means about 5 lbs of liquid each, or about 1000 lbs of drinks. Try hauling that in a normal vehicle or minivan. It's quite difficult, and for most employees, such a monumental undertaking of hauling 1000 lbs of liquid is just not practical. That's where my wife and I come in. I drive a small Toyota pickup. She drives a Sienna minivan. Between the two of us, we can haul about 1 tonne. This is sufficient for most warm weather BBQs up to 200 persons.
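Turning that into numbers: the sketch below assumes the 2.0 liter figure applies across the whole crowd as a planning number, and that a liter of beverage weighs roughly a kilogram (2.2 lbs). Both assumptions are mine, for illustration:

```python
def drinks_weight_lbs(n_people, liters_each=2.0, waste_frac=0.25):
    """Pounds of beverages to haul for a warm-weather BBQ, padding
    per-person consumption by the ~25% that gets wasted."""
    liters = n_people * liters_each * (1 + waste_frac)
    return liters * 2.205  # ~1 kg per liter of beverage, in lbs

print(round(drinks_weight_lbs(200)))  # about half a ton of drinks
```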

I still haven't gotten down to describing how to cool the drinks. And this is a logistical trick. Basically, in warm weather, ice melts quickly. And it takes up volume, so it's difficult to store. The key, therefore, is to buy the ice the morning you plan to use it. Or better yet, if someone on the team has an uncle who owns the largest West Coast ice distribution company, even better. In fact, Kim, our expectant admin, has an uncle with an ice company, and we were able to get high quality (i.e., very cold, small-chipped) ice in large quantities, shipped to our site at exactly 11 am, prior to the 12:30 pm serving time. We needed a total of about 100 lbs of ice for the afternoon to chill about 1000 lbs of drinks.

With 3 bushel-sized plastic drink tubs and two large-capacity ice chests, we were able to load about half of all the drinks loosely into the tubs and put about 2/3 of the ice into the ice chests. We then took the ice bags and spread ice over the drinks. But solid-to-solid contact between ice and beverage containers transfers heat poorly - tiny contact area and essentially no convection. That tactic alone would take 2 hours to chill the drinks. To get that down to 15 minutes, a trick we heat transfer guys use is the phase equilibrium of an ice/water mixture. Basically, you can be assured that as long as solid ice is still present, a well-mixed water/ice slurry will stay very close to 0 deg C. A beverage in contact over its entire surface with such a cold slurry will cool down to about 5 deg C (fridge temp) in 15 minutes or less. So the key is to add water to the tubs to immerse most of the drinks, then pile on the ice and fold it into the water to make a slurry. The heat transfer rate goes way up, as the liquid water not only increases the wetted surface area but provides a fluid for much more rapid convective heat transfer than ice alone.
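The difference can be sketched with a lumped-capacitance (Newton's law of cooling) model. The heat transfer coefficients below are order-of-magnitude values I've assumed for illustration, not measurements, and the can geometry is approximate:

```python
import math

def cool_time_min(h, T_bath=0.0, T0=25.0, T_target=5.0,
                  m=0.37, c=4186.0, A=0.03):
    """Minutes for a drink to cool from T0 to T_target in a bath at T_bath.
    h: convective heat transfer coefficient (W/m^2.K) - assumed value.
    m, c, A: mass (kg), specific heat (J/kg.K), and surface area (m^2)
    of a 355 mL can, all approximate."""
    tau = m * c / (h * A)  # thermal time constant, seconds
    return tau * math.log((T0 - T_bath) / (T_target - T_bath)) / 60.0

# loose ice on cans (poor contact, mostly air gaps): h ~ 10
# ice/water slurry (convective liquid bath):         h ~ 100
print(round(cool_time_min(10)), "min vs", round(cool_time_min(100)), "min")
```

With those assumed coefficients, the model reproduces the observed gap: a couple of hours for loose ice versus roughly 15 minutes for a slurry.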

Where's the Beef?

Good meat is hard to come by these days, at least at a price that can feed a hundred or more people cheaply. Thank goodness for Costco - the perfect store for these occasions. It sells boneless beef chuck rib meat that comes in strips about a foot long and 2 inches in diameter. The strips usually have a square cross section, which is nice for grilling because they won't roll around, letting you keep track of cooking time per side. The meat is nicely marbled, but somewhat tough along the grain with a few sinews. At just $3/lb, it's a bargain. It's great as stew meat, much more tender after a long cooking period, and has great flavour from being next to the rib. It is far more moist than a tri-tip or skirt steak at $4/lb, both of which get tough and dry. However, there's a reason why the meat is cheaper. It isn't shaped anything like a steak, being long rather than flat and round. It can be a bit tough straight out of the package without treatment. And as far as steaks go, it isn't meant to be served as individual slabs. Instead, it needs to be cut across the grain into slices about 3/4 inch thick and served as small steakettes.

But, there is no doubt this meat can rival the flavour and satisfaction of a ribeye. The key is preparation. Most strips have one side that has a thick, attached membrane that was stuck to the ribs. This needs to be trimmed off and any excess fat removed. This comprises no more than 5% of the weight usually. Then the meat needs time to marinate and break down. There are two schools of marination - one that works and one that doesn't. One school recommends using enzymes to break down muscle fibres and protein, usually with some tropical fruit juice. But as steak aficionados already know, this destroys texture and makes the meat gritty.

The other way is to use some type of chemistry - usually mild pH differences and sharp changes in salinity or chemical concentration - to drive cellular breakdown or desiccation, which is more akin to aging. This is essentially the same as brining meats, which makes them retain more moisture during cooking; while in the brine, the tissues soften quickly. Depending on the kind of marinade, the degree of saltiness, the temperature, and other factors, you may only need to marinate for a few hours. But I like to go easier on the salt, marinate overnight, and kick in some other flavours. My wife discovered a brand of Korean BBQ sauce that isn't the standard tangy/spicy one used on kalbi. Instead, it's similar to a Japanese-style steak marinade. Very low viscosity, but very dark. It seems to have some molasses and dark sugars, rice wine, salt, dark soy, and other ingredients. It isn't as salty or sweet or viscous as other sauces. It works very well on chicken and beef, and especially well with these rib steaks. A large zip-loc freezer bag with about 5 lbs of meat and about 1 cup of this sauce mixed with 1/2 cup of water seems to do the trick. Since the meat comes mostly defrosted but still somewhat solid in the core, a space-saving trick is to store it overnight in an ice chest. The meat can thaw and still stay as cool as the fridge, and marinate more quickly.

Coq au Vin

Chicken and white wine just seem to get along extremely well when combined and cooked, releasing aromatic esters. My wife makes an oil-n-spice mixture of light olive oil, Montreal Chicken Seasoning, and some Italian herbs and slathers this over boneless chicken thighs. Generous portions of chicken get shoved into zip-loc bags as well, and before throwing them into the cooler, we pour in a 1/4 cup of a good white wine, like the $2/bottle Charles Shaw Chardonnay you can get at Trader Joe's. It takes less than an hour to process about 60 lbs of chicken this way, and the result is about a dozen bags of ready-to-grill poultry that takes about 15 minutes per batch.

Meanwhile... the BBQ Pits have settled down

After a good 45 minutes, the BBQ pits settle down - no more big flames and smoke, just white-and-red coals that are very hot. About 20 minutes before the bulk of the people arrive is when I usually take a final pass at cleaning the grill and then start placing meat over the fire. The fire is not always uniform, and there are always hotspots. In addition, for the first 30 minutes or so, the flame may be so hot that you really need to watch the meat closely and flip it a lot. In fact, keeping the grill an extra 6 inches higher can help you keep up with the flipping. The chicken may have lots of oil residue that can drip into the fire and cause flare-ups, and the fatty parts of the beef can drip as well.

Keeping hands and arms wet is critical early on. Also, a long metal skewer, like a fireplace poker with a sharp hook, is better for reaching the middle of the grill without sticking your arm over the pit and getting it roasted. Remember to use two sets of tongs and flippers: one for raw, and one for cooked. I make it almost a religious practice to handle food only with the raw-food tongs until the exterior is cooked; after that, all handling is with the other set. This is the reason for having the fire on one side of the pit. The raw food is placed on my right side, then moves left as I flip it. As I take it off the rack on the left, I switch to the cooked tongs. For most right-handed folks, flipping by rotating the wrist counter-clockwise is easier, hence you want to start out on the right side and flip moving left. Chicken is done in about 15 minutes if it has been kept flat. If the meat is balled up, you will need to check the inside; sometimes it can take 25 minutes or more. So it's important to lay the pieces down flat on the grill from the start. The steaks take about 20 minutes to cook because they are thicker. Individual chicken pieces can each be their own serving, but the foot-long strips of rib meat are better off cut across the grain into 3/4 - 1 inch thick slices. This steak is still tender and juicy even at medium well, so it is not necessary to run different batches to accommodate those who like their steaks rare or medium rare. Also, for such large parties, it's usually best to err on the side of well-done.

I don't recommend slicing the meat immediately. I prefer to have an aluminum tray or two (one for chicken, one for beef) at the colder end of the BBQ to put cooked items into. It's still warm there, for sure, but not so hot as to continue cooking. Here, meats can stay warm but rest for a few minutes. Sometimes you have no choice, if a queue forms waiting for fresh meat off the grill. But if you do have time, ideally you'd let the strip steaks rest for about 5 minutes and then slice them. The pause gives the meat time to reabsorb some of the juices that were boiling inside, driven up through the meat by the intense heat.

The hot fire early on is great for cooking fast, and hopefully it satisfies the bulk of the customers coming to feed. As the heat dies down, you can lower the grill to maintain cooking capacity, or leave the grill at the same level and reduce the urgency with which you watch the food. Then maybe you can change shifts with an alternate grill-master and schmooze with the others.

Sausages, Portabello Patties, Garlic Bread, etc.

Cooking sausages is the same as cooking other meats. The key is to use tongs and not a poker. We don't want to puncture the skin on a sausage and thus allow the juices to escape. Sausages are best left till after the fire is a bit cooler. This is because they tend to have high fat content and can catch fire easily when the grill is extremely hot. It's better to slow roast them. Sausages are ready when the skins get translucent and split or holes start to form and allow juices to escape under pressure. This means the center has reached a pressure above 1 atmosphere, and thus inside the sausage, with the salinity factors, the temperature must exceed the boiling point of water. It's a good practice to leave a sausage that is squirting on the grill for another minute to ensure that the interior is evenly cooked. Afterwards, I also like to slice sausages into bite sized chunks. Those who like it can take more. Those who don't, won't waste as much.

For vegetarian burgers, garlic bread, and other non-meat items, the grill causes a major reduction in moisture. For garlic bread, which toasts quickly over the hot coals, this isn't a real problem. But any type of vegetarian patty can turn into cardboard - literally, it will taste and chew like cardboard. What works is to pre-fill an aluminum tray with a rich vegetarian broth and red wine reduction plus some type of tomato-based sauce. Off-the-shelf BBQ sauces that are thick and viscous work here. Taking one part sauce and one part stock, and adding a teaspoon of red wine per liter of total liquid, makes a very nice mixture. All of this can be mixed inside the aluminum tray, then placed directly on a hot part of the grill. The sauce will start to simmer after about 10 minutes and begin to reduce. As you finish grilling the portobello patties, you can dump them into this sauce, preserving their moisture, adding a lot of flavour, and keeping them hot. Folks who serve themselves will then get a very nice, moist, rich sauce on a vegetarian product. The only problem is that the non-vegetarians may like this too, and deplete your resources quickly. I came prepared with 2 x 28 packs of vegetarian patties.


Clean Up

Clean up is always a problem. If everyone simply picked up the garbage and/or recyclables around them and put stuff in its place, the rest would take care of itself. But a lot of the time, folks abandon their messes and others need to pick up after them. I usually travel with all my service ware, supplies, cutting boards, knives, and kitchen supplies in large plastic tubs with lids. Dirty stuff all goes into a special tub and gets taken care of at home, and I keep a separate one for small leftover stacks of plates, cups, and other paper products to be used at the next event. Packing and gathering leftover stuff usually takes less than 15 minutes. But I try to allocate 30 minutes for a final walk-through to clean up the place, and another 30 minutes for sorting garbage from recyclables. It's amazing how many folks will carelessly throw a recyclable into a trash receptacle when the recycling bins are just 2 feet away. But allocating time for clean up ensures that next time, the park maintenance folks will welcome you back. Oh, and rolls of paper towels, a pack of food-service plastic gloves, and a jug of disinfecting wet wipes are pretty much de rigueur for us BBQ warriors!

The total bill for the event came to about $861. The total number of participants was probably about 175 or so; it's hard to say. It sure was a LOT easier hauling stuff back. I was left with just a few bags of chilled meats and supplies, and less than a dozen bottles of water. At under $5/person, that's good morale boosting, I think.

Thursday Sep 16, 2004

Triple Boot Laptop, Finally

I've finally gotten off my lazy derriere and put together a triple boot laptop. Surprisingly, the hardest part of the whole thing was installing the new, larger hard drive. My primary laptop is a cheap $750 Toshiba unit with a 20 GB disk that I picked up about 3 years ago on sale with some big coupon, plus a $50 rebate. It was one of those non-expandable FRUs (Field Replaceable Units) with drives that supposedly couldn't be upgraded, but I purchased it at the time thinking otherwise.

Being a geek entitles me to feel overly confident about any piece of consumer-grade computer hardware, even though I may have no clue what the heck I'm getting myself into. And as Murphy's Law would have it, you bet I spent a good 3 hours the other day poking around the clam shell before giving up and calling Toshiba's Service Center to get a quote on installing a new 40GB drive I'd supply. I got a pretty prompt reply: $89 labour. Or for $129, they'd transfer data to the new disk and extend the warranty on the whole laptop for another year.

Uh... that price was way too high. 'Okay, time to really put some brain cells to use and figure this problem out,' I thought to myself after getting the quotes. As luck would have it, I did figure out how to open the case, and I didn't break anything doing it. The secret was to attack the screws holding the top LED panel cover over the LCD hinges. There are 4 screws in total, two 6 mm long and two 3 mm long, with mini Phillips heads. With a flat, thin prying blade and the LCD panel bent back until it was almost fully open at 180 degrees, I could pry and pop off the LED cover plate. It revealed 6 more screws that anchor the keypad and top half of the clam shell to the bottom. I removed the keyboard as well, then turned the laptop over and removed the dozen or more long 18 mm screws around the edges of the clam shell. Four more large screws anchor the LCD panel, which I removed as well. Then there are 3 more in the mid-section that hold the clam shell together. With enough screws taken out, I could open up enough panels to gain access to the floppy and hard disk drive bays. With almost everything taken out, I decided to clean the built-in mouse track-pad and buttons. In total, I think it was over 48 screws, or at least it felt that way; just keeping track of all the pieces and screws was pretty hard. I definitely don't recommend you do this with young children around.

I actually got pretty lucky re-assembling the unit. The first time, I only had 2 screws left over. We all know that having extra screws pretty much sucks; you never know if it's some critical structural or disk drive screw. But I looked at it from a positive point of view: I could have forgotten way more screws. I took the laptop apart again, and this time found where the screws were supposed to go and filled 100% of the empty screw holes... well, I think I filled them all... unless the kids took some of them.

With the hard drive in, installation of software is pretty straightforward. I had to install WinXP first. The WinXP software is a recovery-only type of install. It formats whatever disk is in the main drive, creates one monster partition, and sticks WinXP on it. The Win32 install assumes it owns the laptop and thus doesn't care about installing other OSes or playing nice. It will overwrite and hose everything else on the laptop.

But that's not a problem. The Open Source community has come through with a SystemRescueCd image that contains a mini Gentoo Linux distro and nifty partitioning utilities on a bootable CD. The iso image is about 110MB. Two included utilities, QTParted and ntfsresize, are free and very helpful for resizing FAT-32 or NTFS partitions.

To make a triple boot system, I needed to first shrink the disk slice used by Win32/NTFS, then configure the remaining space into three slices: one for Solaris and the other two for Linux. Solaris needs to be installed before Linux for several reasons, one of which is that I want the Linux GRUB boot loader to boot all three OSes.

I used QTParted to make these additional partitions; technically it took four operations. The first was to create a primary Solaris partition adjacent to the NTFS slice. I didn't need to further sub-divide this slice into Solaris swap and Solaris UFS because the Solaris installer does that for me and stays within its slice boundaries. The rest of the disk I made one big extended partition, and inside the extended partition I made two slices: a large Linux ext3 slice and a small swap slice. QTParted doesn't have a way to create and format Solaris UFS partitions in its menu, and at first glance I wasn't sure what to do. But I recalled that the Solaris x86 partition ID is the same as Linux swap, 0x82. This can present a problem during the Solaris install: if the installer sees Linux swap partitions, it will treat them as Solaris primary partitions, possibly installing on them. To avoid this snafu, we create an extended partition and put all the Linux partitions inside. The Solaris installer won't look inside the extended partition, so the Linux swap there is safely hidden from Solaris.
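The resulting layout can be sketched in sfdisk-style notation. This is just an illustration: the device name /dev/hda, the slice numbering, and the sizes are assumptions, not taken from the actual laptop.

```shell
# Hypothetical sketch of the final partition table. Note that 0x82
# doubles as both the Solaris x86 ID and Linux swap, which is exactly
# why the Linux slices live inside the extended partition.
layout='
/dev/hda1 : Id=07   # NTFS, shrunk with ntfsresize
/dev/hda2 : Id=82   # Solaris; the installer subdivides UFS + swap itself
/dev/hda3 : Id=05   # extended partition wrapping all the Linux slices
/dev/hda5 : Id=83   # Linux ext3 root
/dev/hda6 : Id=82   # Linux swap, invisible to the Solaris installer here
'
echo "$layout"
```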

Installing the software was pretty straightforward. All the distributions came on CD, so the standard mode of sitting around and inserting the next disk is in order. WinXP recovery has 3 disks for my Toshiba. Solaris and Linux each have 4 in total for the full distribution with documentation and multiple locales. Installation time was about an hour for each. WinXP and Linux each have over 500 MB in updates and additional software to download and install: service packs, updates, additional browsers, email, office utils, etc. Solaris 10 has yet to ship and so doesn't have a big list of updates, though it may suffer from a lack of drivers. Hopefully driver problems won't impact folks out there. The graphics and network drivers are often the culprits, and the key is to bypass the graphical install, move on, and fix things later. I'm impressed because Solaris x86 has come a long way on drivers in the last 3 months. This Friday, Sep 17, Sun's Alan Duboff, our Solaris x86 Technical Ambassador, will host another install-fest internally. I'm eager now to try the Solaris OS update features on the latest builds.

Some interesting observations about the other installations: WinXP Home took about 5 hours to fully install. The first thing it did on boot was notify me of extremely urgent OS updates that were super critical to the health of the computer. To some, this is a great feature, but to me, it was kinda scary. I felt quite vulnerable during that first boot as I madly scrambled to download the patches and then start up the Network control panel to block all further incoming connections from outside. I had to fork over money too: $50 to download Norton AntiVirus 2005 and get it installed. But I guess I didn't want to wait to head out and buy some OEM bulk copy of the 2004 version for $10. Definitely boot a fresh install of WinXP from behind a firewall/NAT router. Make sure it's the only Win32 machine running on the private net. Don't connect it to even a LAN that might have viruses active, because if you haven't had a chance to turn the firewall on in XP, you could be infected by some RPC virus right off the bat. But I guess that's the cost of doing business with Win32. In retrospect, the download of XP SP2 took so long, maybe I should have headed out to buy the OEM virus protection.

With Linux, it would have taken quite a while, except I archive updates on a server at home with lots of disk. There were 400 MB in RPMs, and this includes new multimedia packages. I use yum, the Yellowdog Updater, Modified, to download these updates. It's quite brain-dead easy. The tip here is to configure yum to preserve RPMs after installation (rather than blowing them away), then archive the RPMs and install the updates on other machines to save on bandwidth and time. That's why it pays to keep all desktop systems in a home or small business updated to the same revision of the OS. By default, the Windows Updater obscures where it puts the temporary packages it stores for updates, so each machine has to run its own update. I guess it helps eat more bandwidth, which may or may not be a good thing. On the flip side, Win32 is so insecure, would I really trust a home machine to store archived updates? Not if my Dad was a user. He has all the spyware and virus blockers and anti-spam filters, yet he still gets about one nasty piece of spyware per month, and it's saved and runs out of the IE cache. I haven't upgraded him to Firefox yet because I never have the install media when I'm over at my parents' house, and they use dialup, so downloads are out of the question. Again, it's a huge cost of doing business on Win32, even for home users. If I weren't around to provide tech support, he'd be toast and so would his online stock portfolio.
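The keep-the-RPMs trick is a one-line config change. Here's a minimal sketch: keepcache and cachedir are real yum options, but the archive path in the comment is a made-up example.

```shell
# Minimal yum.conf fragment so downloaded RPMs survive the install
# instead of being deleted afterwards.
cat > yum.conf.example <<'EOF'
[main]
keepcache=1
cachedir=/var/cache/yum
EOF
# After updating one machine, stash the cached RPMs for the rest,
# e.g. (hypothetical server path):
#   rsync -a /var/cache/yum/ server:/export/updates/yum-cache/
grep '^keepcache' yum.conf.example
```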

Well, I've got my triple boot system. It was pretty straightforward to do. Just a few small procedures to follow, and I now have a small buffet of OSes. With a 40 GB drive it's possible to create a shared user-data partition in the extended space and install quite a few OSes in smaller 4 GB slices, which then all access that shared slice for home directories. I've debated configuring a laptop this way. Sometimes less is more. Sometimes more costs too much. Sometimes paranoia sets in about the integrity of that data partition if the wrong OS were to boot and mount it. One thing is for certain: in a year, I'll revisit the whole decision again and probably re-install something else.

Sunday Aug 22, 2004

Getting more Nines on Wi-Fi Availability

Reliable, Available, and Serviceable - RAS - is almost like a mantra we mumble to ourselves in Enterprise computing. You've all heard the story of the renovated university building at Stanford or Berkeley where some Sun box running as a department mail server was accidentally walled up behind sheet rock, and it kept running that way for 5 years until someone decided to upgrade the system and couldn't find it anywhere, but could still ping it.

But I had a related little "thank you" email come across from a neighbour up in British Columbia, where my vacation home is. It's halfway up Hwy 99 toward Squamish/Whistler, next to a fancy golf course and next to the water. About 1/3rd of the neighbours are residents from the States. Last Christmas, Telus (a.k.a. BCTel) finally dropped some fibre down from Hwy 99 (which they laid 18 months ago!) into our complex. We all got broadband at a clean 1.5Mbps down/640kbps up. I planned a Christmas/New Year's trip up at the time just to get the network up and install some wireless. That way, I could kick back out on the water fishing and still be logged into work. It always seems like the fish bite better when I'm not paying attention to the rod, so surfing the net just invites more hits.

I keep the Wi-Fi network open to the neighbours and put up two access points on opposite sides of my house. The units also sport high-gain antennas that push the signal clearly out to Hwy 99, which is 1/2 km away. So folks on the golf course should be able to get a clear signal as well. Also, I set one AP to channel 4 and the other to channel 10 to support more users with fewer collisions.

My US neighbours just love the WiFi. Most head up there to ski and golf several times a year. And they've gotten used to the very reliable and available wireless; it saves them the monthly fees and the hassle of paying for their own connection and ensuring it's up, running, and secure when they arrive in B.C. every couple of months. I simply donate the bandwidth. It's a small cost compared to what I pay down here in the Bay Area for my DSL, and the signal is so much cleaner up there too - as if I were just next door to the C.O. Plus, with the exchange rate for $CDN, the price is even better.

So one neighbour wrote me a pleasant thank-you email that expressed some amazement. On a recent trip, they had a power outage in the complex for about 30 minutes (a frequent event that happens once a month or so). But amazingly, they had laptops up and running, and the network never wavered. He said he almost came over and knocked on my door because he swore I must have been up there in the house working and maintaining that Wi-Fi connection, because it's ALWAYS up. Even during the power outage. Whatever I was doing, kudos.

I smiled when I read that. I guess what he didn't see when I came up during Christmas were the dual 50 lb UPS backup power units I had. Each was connected to an AP plus the DSL router and switch. I chose the components carefully, not so much for performance as for reliability and power consumption. I've also learned that fewer moving parts means more reliability. So I didn't put a running server up as the firewall/router, but used a solid-state off-the-shelf one that only has a limited number of ports. This way, I have battery-backed power always available for the network, and it's enough to power the entire network for 7 or 8 hours, which outlasts 95% of outages.

So why back up the network? Because the Network IS the Computer. That's another mantra our company has preached for the last two decades. But more importantly, I've learned from my mistakes. It was pretty embarrassing a few years ago when I was helping a friend set up his servers for a Linux startup. The whole rig was in his garage. We bought a boatload of big UPSes to back up the servers. But we completely forgot about the network, and on our first power outage, the servers were fine but the network was down. That was pretty stupid, and I've gone on to refine how I get more reliability into my networks. Some tips I remind myself of:

  1. With UPS, size matters. Bigger means more power longer.
  2. Cheapest and simplest are often better. Instead of forking over big bucks on a name-brand UPS for your home, the biggest and simplest way to get backup power is to daisy chain a couple of 12 V deep-cycle batteries to one of those plug-in rechargeable outdoors/jumpstart camping power packs with a CAR/12VDC cigarette lighter socket.
  3. To support multiple devices, buy a car cigarette lighter socket power-strip. It's got like 4 or 5 sockets. Then for each device, I get a DC-to-DC converter that has a Zener diode to step down the voltage. This is why I select network devices for their DC power requirements: as long as they are under 12 VDC, I can buy/make an adapter cable that will power the device.
  4. Forget those 12VDC-to-110VAC inverters. Too much power loss in the conversion to AC and conversion back into 9VDC/1.5 amps. You get LONGER backup times going DC-to-DC only.
  5. Don't put a PC running Linux as a NAT firewall/router/DHCP server, even if it's a VIA Eden MoBo unit. It's less reliable and eats LOTS more power and requires a tonne of maintenance, plus you still need to power a switch. Most common WiFi AP routers have all these functions plus remote manageability.
  6. Wi-Fi APs with router and switch are much more integrated, lower power, and more reliable than separate switch, router and dumb AP.
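A back-of-envelope check on tips 1 and 4 - how long a battery should last feeding the gear DC-to-DC. All the numbers here are assumptions for illustration, not measurements from my setup.

```shell
# Rough runtime estimate: a 50 Ah 12 V deep-cycle battery feeding
# ~10 W of AP + DSL router + switch through a DC-to-DC converter at
# ~85% efficiency. Hours = Ah * V * efficiency / load.
BATTERY_AH=50; VOLTS=12; EFF_PCT=85; LOAD_W=10
HOURS=$(( BATTERY_AH * VOLTS * EFF_PCT / 100 / LOAD_W ))
echo "${HOURS} hours"   # roughly two days on these assumed numbers
```

Even after derating the battery heavily, that comfortably covers the 7-8 hours mentioned above, which is the point of skipping the lossy DC-to-AC-to-DC round trip.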

Wednesday Aug 18, 2004

Eviction, quiet servers, and IPO


I got a notice last week from Workplace Re-location folks that my current office in Menlo Park is now slated for flex office space. In short, I've been evicted. But I didn't lose my office. I just moved around the corner to a bigger and brighter one.

Don't get me wrong. I think flex is a great idea that works for many folks. Coupled with Sun's iWork initiative where employees work remotely over VPN from home, it's really given us freedom to work where we need to be. In fact, I'm actually working in a flex office right now in San Jose because I needed to attend meetings here, today.

Sometimes I wish there was an option for me to go flex, too. But the WR folks haven't got an option for us engineers who have development systems that we need hands on with. But I have an idea how they could do it, if any of them are reading this. Basically, in addition to flex offices, WR folks need to provide flex-racking space to permanently host our development systems, with terminal server console access and Lights-Out-Management on those boxes. And instead of an office to put these systems in, we'd be given locked drawers or lockers, and a shared office with a couple of desks and a KVM-switched keyboard, video monitor, and mouse that can attach to any of the systems in the rack. The only requirements, of course, would be that we still have physical access to our boxes on this flex rack; that the flex-desk space is separate from the flex-rack so we can work in peace and quiet; and that we have adequate cooling, backup UPS power, unfettered network access, and shared workbench areas.

Some might call that a lab space. But labs tend to operate on a shorter-term project-by-project basis. They are large rooms filled with benches and noisy racks. And many have special network configurations, or physical access controls. And labs tend to be away from the mainstream office space and this reduces the local watering-hole/breakroom socialization aspect of office space in buildings.

Peace and Quiet

For now, I'm content with the new office. It gave me an opportunity to clean out my existing junk, consolidate hardware, and reduce the noise level in my workspace. With 4 servers running in my office, the decibel level can be deafening. I pretty much need to mute my phone all the time on conf calls. Power supplies are one contributor, but increasingly I've noticed that disk drives have been getting really noisy. And this isn't from the newer, faster RPM drives, but from the old small drives; a sure indicator that the bearings are shot on some of my older disks.

One box in particular was a dual 300MHz CPU Ultra-2. It had dual 4 GB SCA SCSI drives occupying the two bays, and these things were whining badly. No surprise, since the box and disks were over 5 years old and the average uptime between boots has been 200+ days! It had been serving as the group web server/Java app server for quite a few years now and was low on disk and noisy. Its response was still snappy, however, and it'd be a real waste to scrap the machine and pay for newer capital when it really just needed a disk upgrade. Since I already had a hot backup on another machine, I didn't need to replace the drives with mission-critical hardware, so I scoured eBay and other local surplus stores in the Valley for SCA SCSI drives. To my surprise, I found a store in San Jose that carried refurbed 9.1 GB SCA drives from $2.99 and up. I picked up some 10K RPM Quantum Atlas II 9.1 GB low-profile drives, which formatted perfectly and worked great. And they were just $19 each with warranty. In fact, these drives worked as well as any newer SCA/LVD/SE SCSI drive. I just powered down the system, took out the old drive, moved the Sun spud bracket from the old disk to the new one, slid the disk into the bay, and booted the system. Immediately after the POST, I Stop-A the system and type 'reset' at the OBP ok prompt. I can then confirm the disk is found by hitting Stop-A again after the reset and running 'probe-scsi' at the ok prompt.

Replacing the boot disk was pretty simple. I had upgraded the box to Solaris 9 about a year ago from whatever it was before, and I wanted to keep the boot configuration exactly as it was. So with the boot disk still installed, I powered off the system, installed the replacement drive in the second bay, reset and rebooted the system, then formatted, partitioned, and labeled the new drive. Then I created a UFS filesystem on the disk for root (and any other partitions) and mounted it under the current FS at /mnt. Then I ran:

       # ufsdump 0cf - /dev/rdsk/c0t0d0s0 | (cd /mnt; ufsrestore xf -) 

Afterwards, I ran installboot(1M) to make the new disk bootable:

       # installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t1d0s0 

And then I powered down, physically swapped the first disk with the second, and rebooted with a reset. With a new second drive as well, and the /export/home directories restored on it, I had doubled the disk capacity cheaply and quickly, and reduced the noise from the failing bearings to the dull buzz of the power-supply fan. Not bad for more peace and quiet. I'm now on the lookout for some quiet 36GB SCA low-profile SCSI drives at a bargain. If they have them discounted in a 10 pack, I'm interested.
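The dump-pipe-restore pattern above is a general trick, not just a Solaris one. Here's the same idea sketched with tar as a portable stand-in (ufsdump/ufsrestore are Solaris-specific; the src/dst directories are made up for the example):

```shell
# Same pipe-clone idea as ufsdump | ufsrestore, using tar so it runs
# anywhere: stream the source tree out of one subshell and unpack it
# in another, preserving the layout.
mkdir -p src/etc dst
echo 'hello' > src/etc/motd           # stand-in for real root contents
( cd src && tar cf - . ) | ( cd dst && tar xf - )
cat dst/etc/motd
```

The subshell-and-pipe shape is what matters: neither side ever needs a temporary archive file, which is why this works even when the disk being cloned is nearly full.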

dotCOM IPOs All Over Again?

So what's up with Google's IPO? I heard on the news that it was repriced somehow and the SEC had not replied back with a confirmed IPO date. I was amazed to hear that it was originally priced at $130 or so, and they downgraded the IPO price to $85? That's a huge hit. For a second there, I thought it was the dotCOM boom all over again.

I'm not sure if I'm going to buy any. I'm a conservative investor-type. I follow the Peter Lynch model of buying what I know and buying stocks of companies that I'd buy from. For example, it's not hard to invest in a sound enterprise computer hardware/software business. If the hardware and software solutions are compelling for a large enough enterprise market and the margins are acceptable, then the business model is purely based on execution and track record. That's a no-brainer for any investor. Then there are investments in consumer products companies. For example, I shop a lot on-line for outdoor gear. My favourite store is Cabela's. I've been a loyal customer for about 20 years now. BTW, they run ATG software on Sun, too. They have incredible customer service and great prices. They went public (stock symbol CAB) back in early July I think, and they've gone down a bit but come back a bit too. I buy what I know. And I know Cabela's gets at least $1000/yr in business from me.

With Google, I'm not sure. I've never paid for any of Google's services, so I'm pretty sure they make their money on advertising and through affiliations and selling subscriptions to their search engine software. But I'm not sure what kind of business it can be or what the volume is. If they started charging me to use it, I'd probably drop the service. If they got my telco to price it into the subscription model, maybe I might just pay, but maybe I'd switch to a new provider too. Or, maybe, I'd charge Google back for spidering my websites since my existence benefits them too... hmmm.

I know that when a very popular online free email provider tried to charge me $30/yr for POP access, I cut my ties and migrated my Mom, Dad, wife, and in-laws to my server at home too. As the number of mailboxes I host increases, the move looks smarter and smarter. Plus, I'm getting better privacy since I'm now in control of my mail, and it ain't some lowly paid admin hack that can sit around and monitor it. And I use the same open spam site filters that these guys use. Of course, I have to remember that not everyone knows how to host their own web/mail servers, and that I do spend more in ISP costs. But even if I couldn't host my own email, my Dad and in-laws currently use Access4Less.NET, a $5.95/month nationwide no-frills, zero-support dialup ISP. POP mailboxes are included with that subscription. So there's no need to mess around with any of these freebie email providers that aren't really free.

So, if these free-service dotCOMs aren't making money from me, it's an interesting question where the money's at. Maybe, from a speculative position, lots of investment banks are simply waiting for secondary markets to buy up shares before they cash out and make off with a measly 100% profit. Small compared to the dotCOM boom, when investment banks and underwriters at even the second/third round were getting hold of stocks and making 1000% in just a year. But I could be wrong, and there may be a business model in all this...



