Monday Jun 05, 2006

Farewell

This is likely my last post on Code Complete. My last day at Sun was technically last Friday, as I started my new job today :). It's been a great six years at Sun, and I've learned so much. However, it's now time to open a new chapter in my life and try out something completely different. So -- I'm going to join the software team at Pixar, and make some movies. Completely different industry, completely different kind of company. It promises to be an excellent experience all around. As for Code Complete, maybe I'll actually have time to post here, should Sun move my blog to an "alumni" area, as I've heard they were planning to do. Otherwise, enjoy the archives!

Wednesday Dec 07, 2005

Buildery

It wasn't a word 5 minutes ago, but it is now. I spent all night practicing Buildery, or the act of building vast numbers of Solaris packages. Just as soon as Subversion 1.3 is released, I will unleash a raft of new packages via Blastwave. This will include Apache 2.2.0, Subversion 1.3.0, neon 0.25.4, and SVK 1.05. Most of the long night was spent going through the Blastwave build system and ensuring that the configurations and package admin files for all 32 of the Perl modules that SVK depends on are up to date. Actually, SVK has 49 dependencies, if you count VCP and its 16 dependencies, but I stopped counting VCP because I can't get it to work properly. VCP is an interesting module. It is used by SVK to allow the same command set to operate on Subversion, CVS, or Perforce repositories. I'm not certain how many people would actually use such a feature, but looking at the code, it appears that a great deal of effort has gone into it.

This work is part of an effort to get the build system back into shape after a long hiatus. In the beginning, I downloaded a version of GARNOME, one of the finest build systems for cutting-edge GNOME sources. At the time, I was just trying to install a GNOME environment on my Solaris 8 desktop, as I can't really stand using CDE for any length of time. While I was building GNOME, I was also building a set of GNU utilities which I used for development and production support in Sun's Newark manufacturing facility. Each distribution was compiled and packaged manually, which was taking longer and longer as the number of useful utilities I wanted to provide increased.

Eventually, I decided that it would be a great idea to put these package builds into a GARNOME-style build system. GARNOME itself is based on the GAR ports system, which was developed by the Linux Bootable Business Card (Linux-BBC) project. GAR makes building software extremely easy -- just set a few fields in a template Makefile, and away you go. Over time, each time I received a request to add more Perl modules to our servers, or identified a new tool which was required for some purpose, I'd stick it in my GAR system. I then developed a GAR extension (GAR supports additional extensions as included make files) which would create Solaris packages. This system grew to approximately 300 software distributions, bundled into four packages (most of the 300 were CPAN modules).
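To give a flavor of those "few fields", here is a minimal GAR-style build description. This is a sketch from memory rather than a file lifted from any particular GAR tree, so the exact field names may vary between GAR versions:

GARNAME = foo
GARVERSION = 1.0.2
CATEGORIES = lib
MASTER_SITES = http://download.example.org/foo/
DISTFILES = $(GARNAME)-$(GARVERSION).tar.gz
CONFIGURE_SCRIPTS = $(WORKSRC)/configure
BUILD_SCRIPTS = $(WORKSRC)/Makefile
INSTALL_SCRIPTS = $(WORKSRC)/Makefile

include gar/category.mk

Fill in where the source lives and how to build it, and GAR takes care of fetching, checksumming, extracting, configuring, building, and installing.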

In August 2004, I joined the Blastwave project. I had decided at that point that it made no sense for me to maintain 100% of the packages that I used when someone else was already doing it for me. I would contribute the packages I build that nobody else had picked up, and reduce my workload. From day one, I used the GAR build system that I had adapted for internal use. However, the Solaris packaging support, which at the time was based on make rules only (phear), was error prone, and not as flexible as I'd like. For one thing, it could really only build a single package out of each build directory, and I needed it to build any number of packages from a single distribution install. In response, I split the packaging logic out into a utility called mkpackage.

The mkpackage utility reads a spec file, and performs some actions as a result. The idea is roughly like RPM, only it is far simpler, because, for better or worse, SVR4 packages are much more limited than RPM. It would solve so many problems if SVR4 packages adopted a provides/requires system, or at least supported minimum/maximum package version numbers rather than an exact match. Maybe this functionality will appear once the pkg tools are eventually released to OpenSolaris. Anyway, the spec file is very simple. Here is the spec for apache2c, the core Apache 2.2.0 package:

%var            bitname apache2c
%var            pkgname CSWapache2c
%include        url file://%{GARDIR}/pkglib/csw_dyndepend.gspec
%var            desc Apache 2.2 web server (core)
%copyright      url file://%{WORKSRC}/LICENSE

Simple enough. The %var directive sets a variable for use later in the same file, or any included files. In the above case the bitname variable is used to construct the package filename. The %include directive does exactly what you think it does -- include another spec file. This directive takes two arguments: the first is a method, which in this case is url, indicating that the final argument is a URI which mkpackage must fetch. Any URI which is supported by LWP::UserAgent can be used here, including http://, ftp://, https://, and so on. The other supported method is exec, which will run some command and include the result.

The content of any %include is evaluated immediately. In the above case, the include resolves to this spec:

%include        url file://%{GARDIR}/pkglib/csw_vars.gspec
%include        url file://%{GARDIR}/pkglib/csw_prototype.gspec
%pkginfo        url file://%{GARDIR}/pkglib/csw/pkginfo
%depend         url file://%{GARDIR}/pkglib/csw/depend
%include        url file://%{GARDIR}/pkglib/std_depend.gspec

This is mostly includes, but shows the pkginfo and standard depend file for CSW (Community SoftWare -- Blastwave). There are several directives like %pkginfo which correspond to the different package admin files that SVR4 packages support, including %prototype, %preinstall, %postinstall, %space, and so on. These are treated a bit differently than an include. They create a file in the mkpackage work directory (configurable) called %{pkgname}.<admfile>, e.g. CSWapache2c.pkginfo. As the source file is written into the work directory, any %{variables} in that file are replaced with variables from the environment, those fabricated by mkpackage, and any set in prior %var statements. For instance, the standard CSW pkginfo file is:

PKG=%{MKP_PKGNAME}
NAME=%{MKP_BITNAME} - %{MKP_DESC}
ARCH=%{MKP_ARCH}
VERSION=%{SPKG_VERSION}%{SPKG_REVSTAMP}
CATEGORY=%{SPKG_CATEGORY}
VENDOR=%{SPKG_VENDOR}
EMAIL=%{SPKG_EMAIL}
PSTAMP=%{SPKG_PSTAMP}
CLASSES=%{SPKG_CLASSES}
HOTLINE=http://www.blastwave.org/bugtrack/

The MKP_ variables are supplied by mkpackage, and SPKG_ variables are exported to mkpackage by GAR. Everything is replaced when the pkginfo file is written, and any variables that cannot be replaced will cause mkpackage to complain.
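The substitution itself is nothing exotic. mkpackage's real code does a bit more bookkeeping, but the core of it can be sketched in a few lines of Perl (an illustration, not the actual implementation):

#!/usr/bin/perl -w
use strict;

# Expand %{NAME} tokens in a line using a hash of known variables;
# complain loudly about anything that cannot be resolved.
sub expand {
    my ($line, $vars) = @_;
    $line =~ s{%\{(\w+)\}}{
        exists $vars->{$1} ? $vars->{$1}
                           : die "mkpackage: unresolved variable %{$1}\n"
    }gex;
    return $line;
}

# Variables come from the environment, from mkpackage itself, and
# from prior %var statements -- later sources win.
my %vars = (%ENV, MKP_PKGNAME => 'CSWapache2c', MKP_ARCH => 'sparc');
print expand("PKG=%{MKP_PKGNAME}\n", \%vars);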

As I mentioned earlier, there is a way to execute any arbitrary command and include the result. It is also possible to execute an arbitrary command to create one of the standard files. This is most often used to create depend and prototype files. Default outputs from pkgproto may be fine for most cases, but Blastwave has a set of standards for prototypes that need to be followed. That means that certain ownership needs to be assigned to files, and certain directories and filetypes need to be excluded. For this, I created a simple program called cswproto which takes arguments similar to pkgproto, and spits out a CSW-compliant prototype file. This utility is called as follows:

%prototype      exec cswproto -s %{TIMESTAMP} -v basedir=%{DESTDIR} %{DESTDIR}=/

This finds any files in the destination directory %{DESTDIR} which are newer than the timestamp file %{TIMESTAMP} (created and updated by an implicit make rule in GAR). As I don't build as root, I install software off to a temporary directory (usually /tmp/a). This means that the resultant prototype from cswproto always expects to pull the real file from /tmp/a/some/real/file. However, in the GAR system, it is possible (and useful in a number of circumstances) to change the DESTDIR at will. This makes life difficult for prototype files with a fixed DESTDIR path. The -v switch to cswproto replaces instances of %{DESTDIR} in each line of the prototype with the package build time variable $basedir. Before the transformation, a sample prototype line might be:

f none /opt/csw/lib/libfoo.so=/tmp/a/opt/csw/lib/libfoo.so 0755 root bin

After the transformation, the same line would be:

f none /opt/csw/lib/libfoo.so=$basedir/opt/csw/lib/libfoo.so 0755 root bin

The basedir variable is then supplied to pkgmk, so that it can find all files in the prototype, regardless of the current DESTDIR setting.
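Concretely, the package build ends up invoking something along these lines (the work directory path here is illustrative):

pkgmk -o -d /tmp/pkgbuild -f CSWapache2c.prototype basedir=/tmp/a

pkgmk accepts variable=value pairs on the command line and uses them to resolve $variables in the prototype, which is what makes the $basedir trick work.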

This system has proven to be very flexible -- I can now build any number of packages I require from a single software installation. For example, after building and installing PHP 5, I can build 21 separate packages out of the installed files, separating out less commonly used modules that might have additional dependencies. This benefits users directly: fewer dependencies mean less software to download for common tasks, and less disk space occupied by bits that will never be used.

Wednesday Nov 30, 2005

Subversion 1.3.0RC4

Subversion 1.3.0 RC4 is now officially released -- let the games begin. The release notes are here, and the detailed change log is here.

Tuesday Nov 29, 2005

Rainout

Well, it's finally raining in the Bay Area. Soon, there will be enough snow in the Sierras to make my recent ski gear purchase worthwhile. I can't believe that it's taken this long for winter to finally arrive -- fire warnings and 80+ degree weather at the end of November is unnerving. Global warming at its finest.

On the technical front, I'm anticipating the release of Subversion 1.3. I have been building and testing the release candidates for the last month or so, and things are looking good so far. The current Blastwave Subversion packages use neon 0.24.x for WebDAV, but when I release the 1.3 packages, we'll bump to 0.25.4. One of the best improvements in neon between 0.24 and 0.25 is interruptibility. If you've ever started an svn up operation on a huge repository, and then changed your mind, you'll know what this change will bring. Right now, pressing Ctrl-C to interrupt a long-running svn client operation has no effect until the server responds. With neon 0.25, pressing Ctrl-C will cancel the operation immediately, removing one of the biggest annoyances from the svn command line client, IMHO.

Even better, there was an announcement this week that Google has awarded an internship to Justin Erenkrantz (of ASF fame) to work on Subversion. His primary project will be to develop the SERF WebDAV library and integrate it with Subversion. While neon works fine for Subversion at the moment, its usage within Subversion implements a protocol on top of WebDAV, defeating some valuable features of WebDAV. For instance, rather than using simple HTTP GET requests for most data, the client and server speak using custom REPORT requests. One downside to this is that proxy cache servers cannot cache REPORT operations, which limits the usefulness of caching to improve the speed of repository operations (e.g. cached diffs). Current plans are to have this functionality integrated in time for SERF to be a compile- (or run-) time option by the 1.4 release.

Monday Nov 28, 2005

Ping

It has certainly been a long time since my last post. Unfortunately, most of the activities I'm associated with at Sun are covered by NDA/CDA, so I'm unable to post anything about them. To exacerbate the situation, I've also been devilishly busy. Both excuses have conspired against my blogging habits. However, I'm now in a slightly better place in terms of the amount of brain power I can devote to blogging. With any luck, I'll be able to keep the ol' blog space updated regularly for at least the next six months! :).

I keep a pretty close eye on what is happening in the Subversion community. I have recently toyed with the idea of writing a regular series with a name like "This week on svn-dev", in a similar vein to This Week on perl5-porters. If you are interested in Perl development, that is the place to look: it summarizes the goings-on of the perl5 development effort, including discussions on the design of features and bug fixes. There are always interesting discussions on the Subversion development alias, most of which go unseen by anyone except those able to filter through ~200 emails per week to find the good stuff.

I am also looking into the possibility of writing some articles on Predictive Self-Healing (PSH, specifically fmd) internals. I've done a good deal of work with FMA in my day job recently. In fact, I wrote an object-oriented Perl API around the fault management configuration and reporting tools provided with Solaris 10 (fmadm, fmstat, fmdump). I then implemented a number of utilities on top of that API which are used in the manufacture of most of Sun's products (currently the Sun Fire 2900/4900/6900/20K/25K and US-T1 [Niagara] platforms). The tool set is primarily used to check, count, and classify hardware faults. For instance, I wrote a tool which dumps the contents of the fmd error and fault logs (which are extended accounting files; see libexacct(3LIB)) as XML data that is stored for historical analysis of FMA performance in our manufacturing processes.
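I can't paste the internal API here, but the shape of it is simple enough to sketch. The following is a hypothetical, stripped-down illustration -- FM::Faults and its methods are invented names, and the real API parses fmdump's full verbose output rather than just counting lines:

package FM::Faults;
use strict;
use warnings;

# Run fmdump with the given arguments and capture its output,
# one fault record per line.
sub new {
    my ($class, @args) = @_;
    open(my $fh, '-|', 'fmdump', @args)
        or die "cannot run fmdump: $!";
    my @records = <$fh>;
    close($fh);
    chomp @records;
    shift @records if @records;    # drop the column header line
    return bless { records => \@records }, $class;
}

# The kind of check our manufacturing tools perform after a test
# run: how many faults were diagnosed?
sub count { return scalar @{ $_[0]->{records} } }

1;

A test tool can then write something like die "unit has faults!" if FM::Faults->new->count; and keep all of the parsing details in one place.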

Perhaps I will be able to publish some initial articles this week -- only time will tell. Unfortunately, blogging is icing for me, not cake.

Thursday May 05, 2005

ROFL!

Wow. I mean... Wow. I just passed by Bill Walker's blog and found a link to this 'nerd test.' Of course, once I found it I knew I had to take it just to see how... uhh... not nerdy I am. So, here are the results:

I am nerdier than 95% of all people. Are you nerdier? Click here to find out!

That's right folks! Nerd God -- and I didn't even cheat on the test! Say it with me: "I, for one, welcome our Nerd overlord."

Friday Apr 29, 2005

Sub-diversion

My last update indicated that Subversion 1.2.0-rc2 packages might be released this week. After a bit of testing on one of my sandbox machines, it appears that a long-standing bug somewhere in the system is still causing me headaches. Back in the Subversion 1.0 timeframe, when I started building Subversion for testing purposes, I'd get a message indicating that character translations for messages could not be found. The messages were usually of the form "Cannot create converter from native to UTF-8", or vice versa. This would affect commits to the repository, but not checkouts or other non-write operations. I managed to put together a patch that makes the messages go away with no apparent ill effects, but the fix makes me uneasy -- it requires me to override the macro APR_LOCALE_CHARSET a couple of times in utf.c in the libsvn_subr module. My patch sets this value to the string "ASCII" instead, which appears to do the trick, at least in my locale.

The patch does not apply to svn 1.2.0-rc?, so now I need to go and find out if the problem is really with Subversion, or with APR (0.9.6). I am not able to find much on the subject by searching mailing lists, or googling for the relevant error terms. It appears that the issue is only present on Solaris, and there aren't that many Subversion users out there running on Solaris just yet. If anyone has ideas why I would be seeing this behavior, please drop me a line!

Wednesday Apr 27, 2005

Long time... no update

Yes, I realize it's been quite a while since my last update. Shame on me. However, things have been busy lately -- so much so that writing a blog entry was the last thing on my mind. Unfortunately, there are times when I just can't blog what I'm doing. Either that or I get lazy, take your pick. I'm not going to promise to update my blog daily this time, in hopes that the Universe won't hear me and give me even more work to do (jinx).

Now that I'm in one of the relatively few non-hectic periods in my career, in between major projects, I do have some time to update. I've updated several of the packages I maintain on Blastwave. The most significant is Apache 2.0.54, which is now split out to allow users to select either the prefork or worker multi-processing module (MPM). This is a long-standing request from the user community, who would like to run PHP but are not quite ready to run it on the threaded worker MPM, which was the default compiled-in MPM in the last Blastwave apache2 release. The out-of-the-box default for Apache 2 is prefork, and so the default for Blastwave apache2 will revert to prefork. Those who don't rely on non-threadsafe third-party PHP libraries can continue to run their code on the worker MPM, if desired.

A new build of Subversion 1.1.4 using berkeleydb 4.3.27 will be released later this week as well, which should provide some performance and administrative benefits (e.g. automatically removing unused dblogs). XChat 2.4.3 was released today, with a bunch of improvements and bug fixes, and finally, new Subversion 1.2-RC2 and mod_perl 2.0.0-RC5 packages should be released some time this week or next for testing only.

Wednesday Dec 22, 2004

Sleepless

I am a sucker for punishment. In addition to project deadlines, holiday shopping, and holiday traffic, my wife and I decided that it was time to try the dog thing again. Our last foray into dogville was difficult -- we adopted a field-stray Golden Retriever from a local shelter. She had many latent health problems, and after about a month and a half with us, she passed. That was nearly a year ago, so we figured it was time to try again. We adopted a 3 month old Golden Retriever puppy two and a half weeks ago. While the stress of having a sick dog was significant, the stress of not knowing how to get a new puppy to (a) stop barking, (b) stop trying to eat the cat, or (c) stop rolling in various items in the yard is equally significant. Compounded with the aforementioned stress enhancers, I'm about one step away from a full-on aneurysm. Not really. It's easy to love that face -- say hello to Oliver.

Wednesday Dec 08, 2004

Variety is the Spice of Blastwave

With all of the talk recently regarding the Blastwave project, I polled some of the maintainers to get an idea of why these people give of their free time (and in many cases, of their pocketbook) to support open source packages for Solaris. The response was great, and so I've kicked off an effort to document some of these reasons. Articles are in no particular order, beyond what I feel like writing about on a given day :). And now, onward.

Life without spice, like an OS without a color-capable ls command, is bland. In the past, the limited variety of software available for Solaris earned it a reputation for being a bit bland: solid as a rock, but at the end of the day, rocks alone don't get the job done. Since the very early days of Solaris, system administrators, engineers, and other technically inclined Solaris users have taken the entrepreneurial approach. They began to compile and package for Solaris the vast number of software projects in the wild which typically target only Linux. These projects made Solaris more enticing to users and admins who in the past were spoiled by the riches of open source projects readily available for Linux.

Blastwave continues in this proud tradition of from-the-people-for-the-people, providing more functionality for Solaris, improving its usability and, at the same time, its reputation. There is a stunning array of software that Blastwave makes available as standard Solaris SysV packages. At the time of this writing, the most recent release of great note is Gnome 2.8. Installing or upgrading Gnome on a Blastwave system is a snap -- just run pkg-get -i gnome, and all of the required packages, including dependencies, are installed for you.

If you're not into Gnome, you can always grab the K Desktop Environment (KDE) instead. It is packaged with end users in mind, including dtlogin integration, so after installation, it's as easy as logging out of your current environment and logging back in to KDE! And the fun doesn't stop with the desktop (although it could, if you're happy at that point). If you are an administrator, there is a wide variety of software available, including Apache (both 1.3.x and 2.0.x), MySQL 4.0.x, Subversion 1.1.1, and many, many more. In fact, at the time of this writing, Blastwave has 932 high-quality open source software packages available -- just install and go!

Check out Blastwave today, and see if the software you need is available. If it isn't available, check the request form to see if someone has already requested it, and if not, add it to the list (or better yet, build a package and sign on as a maintainer).

As always, if you or your company are powered by Blastwave, please consider donating to the project, purchasing a single DVD, or even a DVD subscription -- every little bit helps to ensure the future of high-quality open source software packages for the Solaris Operating Environment.

Monday Nov 29, 2004

S.O.S -- Save Blastwave!

For the last several months, I've been involved with the Blastwave project. This project produces high-performance, high-quality Solaris packaged open source software. As many packages as possible are compiled using the Sun One compiler suite, and many packages include both 32- and 64-bit libraries and binaries. The packages aim to provide the best out-of-box experience to the end user, with little or no post-installation configuration required. Some of you may have heard of this project; some may be using these packages right now. It's the ideal way to get open source software like Gnome 2.6.2 and KDE 3.3.1 up and running on everything from Solaris 8 through to 10, on both SPARC and x86 processors (with AMD64 support on the way). In fact, there are currently 878 packages available from Blastwave. Packages are available via a package management utility, pkg-get, which is much like the Debian Linux apt-get utility. This makes it easy to update existing packages, or find new packages to install. A much better alternative is to receive all of the packages on a single DVD! The single DVD is available for USD$20, with a monthly subscription service available for 20% of the single DVD cost.
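For those who haven't tried pkg-get, day-to-day use looks something like this (flags quoted from memory -- see the pkg-get documentation for the authoritative list):

pkg-get -U                # refresh the package catalog from your mirror
pkg-get -a                # list the packages available
pkg-get -i subversion     # download and install a package, plus dependencies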

So, now that I'm done with the sales pitch -- what's all this about an S.O.S?!? Well, this project has so much momentum at the moment on the community side, but next to no momentum on the financial side. The project's founder, Dennis Clarke, has footed the bill up to this point -- that includes server space for the build farm, bandwidth for synchronizing mirror sites, bandwidth for the web page, and up-front costs for production and shipping of the DVDs (not to mention the labor). At this point, the project accounts are dry, there are a scant 30 DVD subscriptions to date, and there have been no offers of corporate support or sponsorship! Some of the maintainers have managed to scrounge up servers to support update and maintenance of the build farm and supporting servers, but that's just part of the solution. For more background info, please read this thread on comp.unix.solaris started by Blastwave maintainer Mark Round.

So my plea is this -- if you use Blastwave, please buy a DVD, or better yet get a subscription. Save yourself the time spent downloading! If you are part of a company (or own a company) that is powered by Blastwave, please consider lending the project a hand through corporate sponsorship. The project has come so far, and achieved so much that it would be a disaster to lose it all. Please support Blastwave and Solaris Open Source software!

Friday Nov 12, 2004

Run, IPC::Run

I do a lot of work that involves automation -- both product and software test. This week, my task was to automate some testing of Perl code which drives the Solaris 10 fault management log viewer, fmdump. The fmdump utility allows administrators to view system errors and faults with varying levels of verbosity. In order to provide our manufacturing test process with information about faults that occurred during test, tools are required which can parse the varying information output by fmdump and provide feedback to the test supervisor. Now, once you've created a tool which relies on a system utility, how do you test it in environments where that utility may not exist? Most often, this is done through the use of program stubs -- a program which behaves like the original, but is really just faking it.

Faking it in this case means that for any set of arguments, the stub program should return the same output as the original utility would. In our program stub terminology, this involves playing back a control file, which tells the stub what to output, when to output it, and what exit status to return at the end. I wanted my stubs for fmdump to be able to call on the real program in cases where a control file did not yet exist, and record one. Then the next time that same argument combination was passed to the stub, the generated control file would be replayed. While thinking about this problem, I happened across an extremely interesting Perl module called IPC::Run.

IPC::Run is a module which makes calling, managing, and collecting output from subprocesses easy. It is an extremely flexible module, which allows programmers to hook in at an API level that makes sense for their application. At its simplest, the function run() behaves like system(), but inverts the result so the code makes more sense:

# Old way, using system
system("diff fileA fileB") and die "Failed to run diff!";

# Newer way, using IPC::Run::run
run("diff fileA fileB") or die "Failed to run diff!";

The run() function is probably as far down the IPC::Run API food chain as most people need to go; it is flexible enough to serve most tasks, as we will see shortly. For instance, if I wanted to collect the standard output of the above process in a scalar, I'd change the code to this:

# Create a scalar to store the output
my $out;

# Collect that output
run(['diff', 'fileA', 'fileB'], '>', \$out)
    or die "Failed to collect diff output!";

Easy as that. Want the standard error too?

# Create a scalar to store the output
my ($err, $out);

# Collect that output
run(['diff', 'fileA', 'fileB'], '>', \$out, '2>', \$err)
    or die "Failed to collect diff output!";

There are a large number of redirections that can be implemented using run, including standard input, output, error and more. Also, instead of scalars for data sources and destinations, you can specify filehandles and even subroutines. The latter is what I needed for my control file generation. I wanted to be notified each time a line of output was created on either the standard output or error, and needed to record the time a line came in, with reasonable precision. To do this, I used the following call to run:

# Get hi-res versions of sleep and time
use Time::HiRes qw/sleep time/;
use IPC::Run qw/run new_chunker timeout/;

# Command to execute
my @cmd = qw/fmdump -V/;

# Array to collect 'events'
my @events;

# First, record the command and args
push @events, [ time, 'cmdline', join(" ", @cmd) ];

# Create control events
eval {
run(\@cmd,
    '>', new_chunker, sub { push @events, [ time, 'stdout', shift ] },
    '2>', new_chunker, sub { push @events, [ time, 'stderr', shift ] },
    timeout( 5 ),
  );
  push @events, [ time, 'status', $? >> 8 ];
};
die if $@;

This snippet of code runs the fmdump argument combination I requested, and records information about how the command behaves. The new_chunker entries in the filter chains of the run() call for both streams are filters provided by IPC::Run. This filter brings in input from the process and splits it into lines (by default using the "\n" separator). Each line is then passed on to the anonymous sub, which records the stream source, time, and line in the event array. When the process exits, the routine then records the exit status in the event array. Note the use of the timeout() function. This tells run() the number of seconds to allow the task to execute before giving up and throwing an exception. That exception is the main reason for invoking run() from within an eval block; it prevents misbehaving argument combinations from hanging forever. For instance, if someone specified fmdump -f, fmdump attempts to 'file tail' the fault log, and won't exit until it receives a signal. Using a timeout ensures that such a run will only last for 5 seconds, and will not generate an incomplete control file.

Now that we have the full details about what happened during the process run, it's time to do a bit of housekeeping. The first step is to go through the event list and calculate the line to line delay based on the timestamps. Times are then floor()d to a reasonable precision, rather than keeping the full Perl floating point precision. For this particular application, the precision is 4 digits (0.0001s or 100µs). The event list is then written to a control file on disk, encoded so that the same combination of program and arguments will generate the same control file name. All that is left to do now is to run the Perl module test suite, pointing it at our stubbed fmdump rather than the real one. This generates all of the control files necessary to run that suite of tests. I then can check these control files into revision control as part of the test suite. It is then possible to run the module test suite in any environment.
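For the curious, the two interesting bits of that housekeeping -- the delay computation and the eventual playback -- look roughly like this (a simplified sketch of the stub machinery, not the production code):

# Convert absolute timestamps into line-to-line delays, floored
# to 4 digits (100µs) so regenerated control files are stable.
my $prev = $events[0][0];
for my $event (@events) {
    my $delay = $event->[0] - $prev;
    $prev = $event->[0];
    $event->[0] = int($delay * 10_000) / 10_000;
}

# Playback is the mirror image: walk the recorded events, honor
# each delay, emit lines to the right stream, and exit with the
# recorded status. (Other event types, like 'cmdline', are skipped.)
for my $event (@events) {
    my ($delay, $type, $data) = @$event;
    sleep($delay);    # Time::HiRes::sleep accepts fractional seconds
    print STDOUT $data if $type eq 'stdout';
    print STDERR $data if $type eq 'stderr';
    exit($data)        if $type eq 'status';
}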

Tuesday Nov 02, 2004

Liquid Layout

Yesterday evening, I thought I would log into the ol' blog from home and type a few words about the really creepy movie The Grudge, which I saw on Sunday night. Unfortunately, when I loaded up the page, the entire sidebar was scrolled off the side of the browser screen. I had taken special care to ensure the page would scale for other view sizes, but I guess somewhere along the way, something broke the very complex table-based layout I use. I vowed to fix it as soon as possible.

Long story short (how often does that happen here?), I rewrote my page templates as a two column liquid layout, using CSS for positioning. I used the excellent Floatutorial from Maxdesign to get the basic structure of the page right. Then I inserted the Roller content tags, and voila! A page which scales correctly, and now even displays properly in text-only browsers like lynx! Not to mention that now I don't have to keep a score sheet next to my keyboard for tracking down missing table cell close tags.

Wednesday Oct 20, 2004

About Everything

Ok -- it appears that my blogging habits are a bit 'binge and purge' at the moment. Multiple posts per day for a whole week, then nothing but crickets for a week after that. I have a good excuse, however, as I was taking a few days of much-needed vacation time. I try not to touch a computer that isn't running a game during my vacations, unless I find some good reason to work on one of my Open Source projects. But I do make a concerted effort to stay away from work activities (including blogging :)).

After a couple of days of upgrade pre-planning, including making some much-needed backups, I installed my new MSI Neo2 Platinum motherboard. I struggled a bit with the bolt-through heatsink designs used on newer motherboards. The MSI board comes with a support plate on the rear of the motherboard and a clip harness on the front for mounting the standard CPU heatsink which comes with the Athlon 64 retail package. Of course, I bought the OEM version and a different heatsink, which requires a different backing plate and a different front-side assembly. The stock backplate was glued onto the back of the motherboard, and the process of prying it off with a screwdriver was a very high-anxiety activity. I even ended up in the OpenForum on Ars asking whether this was normal, and how I should go about getting the plate off without destroying my board. Before I got a decent answer from the forum, I had managed to free the plate from the board -- no damage done. Nice.

The heatsink installation went well after that, but I had exhausted most of an evening getting the heatsink installed properly. The next day, I stripped the old components out of my case and did a full brain transplant. Aside from the initial heart attack I had when I touched the power supply button and heard a spark, and got no power to the board, things went well. I had not connected the second 5V power connector to my video card (really, it's easy to forget to apply extra power to an expansion card, sheesh.), and I guess the card was trying to draw too much power or something, which was keeping the system from powering on. After connecting the extra card power, everything came up fine. I ended up with some additional headaches trying to get WinXP to boot after installation, owing to the messed up partition table on my boot drive, courtesy of Fedora Core 2 (I won't go into the details... therapy has just about erased this part of my memory). A bit of BIOS tweaking, and I was up and running. Everything is better now -- colors are brighter, digital birds sing louder, and that annoying growth on my .... oh wait... work blog. AND, I get sustained ~40fps in the Counter Strike: Source video stress test on highest quality settings. And this is untweaked -- I've been able to overclock my 2.2GHz chip to just over 2.3GHz without crashing, so I might be able to squeeze a bit more performance out using the Dynamic Overclocking Technology (DOT) that is provided by the MSI Neo2 platform.

Speaking of games, I really like the new Valve Steam service. It's kinda like what other companies have been promising regarding software as a service. You download the Steam client, and can browse the catalog of games (both Valve and 3rd party) available on the service. I bought the Half-Life 2: Silver pack, which allows me to download most of Valve's game catalog for play. I don't have to worry about game updates, content packs, or external server lists (e.g. gamespy), as Steam provides all of these features. When I re-install my system, I just have to enter into my account and it will download the bits I own to my system again. No trying to find the jewel case with the right serial number anymore. Cool idea.

Tuesday Oct 05, 2004

Ahoy AMD64

Despite the issues newegg.com has with their website (I still can't access it with Firefox, but it works just fine with IE6 :(((( ), they do work hard to deliver. I ordered my hardware late Friday morning, and the hardware was delivered to my door at 11:30AM on Monday. Excellent. Now the fun work begins, as I figure out how to shuffle data from 3 IDE HDDs onto two, back up all of my critical data from Windows, and then reinstall the whole system once the new hardware is installed. Should be interesting.
