DTrace on LKML

So DTrace was recently mentioned on the linux-kernel mailing list. The question in the subject line was: "Is DTrace-like analysis possible with future Linux kernels?" The responses have been interesting. Karim Yaghmour rattled in with his usual blather about the existence of DTrace proving that LTT should have been accepted into the Linux kernel long ago. I find this argument incredibly tedious, and have addressed it at some length. Then there was this inane post:
> http://www.theregister.co.uk/2004/07/08/dtrace_user_take/:
> "Sun sees DTrace as a big advantage for Solaris over other versions of Unix 
> and Linux."

That article is way too hypey.

It sounds like one of those strange American commercials you sometimes see
at night, where two overenthusiastic people are telling you
how much that strange fruit juice machine has changed their lives,
making them lose 200 pounds in 6 days and improving their
performance at beach volleyball a lot due to subneutronic antigravity
manipulation. You usually can't watch those commercials for longer
than 5 minutes.

The same applies to that article, I couldn't even read it completely,
it was just too much.

And is it just me, or did that article really take that long to
mention what dtrace actually IS?

Come on, it's profiling. As presented by that article, it is even more
micro-optimization than one would think, what with tweaking the disk
I/O improvements and all... If my hard disk accesses were a microsecond
more immediate or my filesystem gave a quantum more transfer rate,
it would be nice, but I certainly wouldn't get enthusiastic, and I bet
nobody would even notice.

Maybe, without that article, I would recognize it as a fine thing (and
by "fine" I don't mean "the best thing since sliced bread"), but that
piece of text was just too ridiculous to take anything seriously.

I sure hope that article is meant sarcastically. By the way, did I
miss something or is profiling suddenly a new thing again?

Regards,
Julien
Yes, you missed something Julien: you forgot to type "dtrace" into google. (If there were a super-nerd equivalent of the Daily Show, we might expect Lewis Black to say that -- probably punctuated with his usual "you moron!") If you had done this, you would have been taken to the DTrace BigAdmin site which contains links to the DTrace USENIX paper, the DTrace documentation, and a boatload of other material that supports the claims in The Register story. In fact, if you had just scrolled to the bottom of that story you would have read the "Bootnotes" section of the story -- which provides plenty of low-level supporting detail. (Indeed, I'm not sure that I've ever seen The Register publish such user-supplied detail to support a story.)

Sometimes the bigotry surrounding Linux surprises even me: in the time he took to record his misconceptions, Julien could have (easily!) figured out that he was completely wrong. But I guess that even this is too much work for someone who is looking to confirm preconceived notions rather than understand new technology...

Fortunately, one of the responses did call Julien on this, if only slightly:

* Julien Oster:

> Miles Lane writes:
>
>> http://www.theregister.co.uk/2004/07/08/dtrace_user_take/:
>> "Sun sees DTrace as a big advantage for Solaris over other versions of Unix 
>> and Linux."
>
> That article is way too hypey.

Maybe, but DTrace seems to solve one really pressing problem: tracking
disk I/O to the processes causing it.  Unexplained high I/O
utilization is a *very* common problem, and there aren't any tools to
diagnose it.

Most other system resources can be tracked quite easily: disk space,
CPU time, committed address space, even network I/O (with tcpdump and
netstat -p).  But there's no such thing for disk I/O.
Of course, the responder misses the larger point about DTrace -- that one can instrument one's system arbitrarily and safely with DTrace -- but at least he correctly identifies one capacity in which DTrace clearly leads the pack. And I suppose that this is the best that a rival technology can expect to do, so close to the epicenter of Linux development...
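For what it's worth, the responder's "really pressing problem" is about a one-liner in D. Here's a sketch using the io provider (probe arguments as documented in the Solaris 10 AnswerBook; the five-second tick is arbitrary):

```d
#!/usr/sbin/dtrace -s

/* Sum the bytes of physical I/O issued, keyed by process name. */
io:::start
{
        @bytes[execname] = sum(args[0]->b_bcount);
}

/* Every five seconds, dump and reset the aggregation. */
tick-5sec
{
        printa("%-24s %@d\n", @bytes);
        trunc(@bytes);
}
```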
Comments:

I've gotten similar responses from some Linux users I know - they don't understand what DTrace is, and when you try to explain it to them, they get very defensive. What's wrong with being excited about DTrace? You guys should be ecstatic about it: It's an incredible accomplishment that you spent years working on. It's not as if every little tweak to the Linux kernel isn't hyped up. And I doubt that Linux will get something truly comparable to DTrace for many years, especially with Linus' disdain for profilers and debuggers.

Posted by Derek Morr on August 20, 2004 at 08:07 AM PDT #

"so close to the epicenter of Linux development..."

And you people at Sun lose market share for those? Shame on you. Solaris is clearly aimed at servers, but Linux also plays a role in the DESKTOP market. Linus has to balance improvements on both sides, and come on, who in the desktop area would use something like DTrace?

Maybe us developers are a bit skeptical about DTrace, since we already DO what this tool is supposed to do, right? Or was there no debugging/profiling before DTrace?

As shown in "http://blogs.sun.com/roller/page/bmc/20040805#demo_ing_dtrace", for me DTrace is nothing more than a big cosmetic integrated debugger/profiler, and it should be marked as such.

I'm pretty sure Linus and most of the Linux core developers won't change their minds; DTrace-like tools would not bring something revolutionary which would help a great share of their users.

Posted by Richard on August 21, 2004 at 01:58 AM PDT #

"Solaris is clearly aimed at servers." Really, that's news to us. The ethos behind Solaris is to have the best features, scalability, stability and diagnosability on the market. On any machine.

"Who in the desktop area would use something like DTrace?" You can use it for whatever you like. That's the point. It's dynamic, it's easy to get into, it's intuitive, and it can be used instead of other OS reporting tools since the data displayed is more user-defined. You really need to give it a go to understand.

"Maybe us developers are a bit skeptical about DTrace, since we already DO what this tool is supposed to do, right? Or was there no debugging/profiling before DTrace?" OK, how do you debug your production code without installing a custom instrumented binary/library? The point is you can just do it without compiling anything up, safely, in production.

"For me DTrace is nothing more than a big cosmetic integrated debugger/profiler, and it should be marked as such." I am sorry, you need to use it. You just don't get it.

Posted by Jon Anderson on August 21, 2004 at 03:09 AM PDT #

This is silly. The guy was complaining that the article introduces the NEXT! C@@L! THING!!! without providing relevant details. The details which had been provided so far covered tools which already exist. DTrace is interesting because it ties the tools together with a nice language. The language is the interesting part, imho. The techniques it ties together have been used for a long time by high-end DB and embedded folks at the very least. (And the language isn't Java? But that's the NEXT! C@@L! THING!!! from Sun.)

Also, simply declaring that other people "don't get it" and dropping the conversation sounds like kook-ism. The Usenix paper does a good job explaining relevant, recent tools. A page or blog entry summarizing that section and collecting comments on similar individual tools for AS/400s, etc. would be helpful. Then you can point out how DTrace brings the tools together to provide a nice, easy, interactive interface to all of their capabilities. Referring people to such a summary page would go a long way towards helping dampen the apparent PR hype.

Posted by oy vey on August 21, 2004 at 06:13 AM PDT #

You write:
DTrace is interesting because it ties the tools together with a nice language. The language is the interesting part, imho. The techniques it ties together have been used for a long time by high-end DB and embedded folks at the very least.
You're missing a fundamental attribute of DTrace: the system is instrumented dynamically; all of the "high-end DB and embedded folks" that you refer to use static instrumentation. There is a world of difference between these approaches: with static instrumentation, one must limit the points of instrumentation for fear of inducing unacceptable disabled probe effect. With dynamic instrumentation, one need not be concerned with such limitations -- opening up entire new vistas for system instrumentation. The D language is a contribution of DTrace -- but we see it more as a consequence of the latitude that dynamic instrumentation provides. (That is, DTrace does much more than simply "tie the tools together" -- it is something altogether new and different.)
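To make this concrete: with DTrace's fbt provider, every function entry point in the kernel -- tens of thousands of probe points -- can be enabled with a single clause, precisely because the probes cost nothing when disabled. A sketch:

```d
/* Count every kernel function entry, keyed by function name.
   None of these probes exists -- or costs anything -- until
   this enabling is running. */
fbt:::entry
{
        @calls[probefunc] = count();
}
```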

I don't have any intention of copying the Related Work section into my blog (interested parties should read the paper), but I am very interested in your comment about "similar individual tools for AS/400." Do you have details on such tools? I have great respect for AS/400 (which is almost certainly the best-kept secret in computing history -- and a technology that suffered immeasurably in the IBM Civil Wars), but I have found hard details difficult to come by. If you can point me to the details (just a name of the tool would be helpful), I'll do the research and post a blog entry comparing it to DTrace...

Posted by Bryan Cantrill on August 21, 2004 at 08:02 AM PDT #

Hello,

Just curious, what is the performance impact of DTrace on the system? I've read the USENIX paper and all those posts about DTrace, but none showed us benchmarks. Another question: is there any article about DTrace security? (Give us more details about the VM.)

Posted by Carlos on August 21, 2004 at 11:02 AM PDT #

The answer to the question of performance impact is "it depends -- but never pathological." It is very easy to phrase DTrace enablings that have an unobservable effect on the system, and it's likewise easy to come up with enablings that have a significant impact on the system. That doesn't really answer your question, but I think the experience of most has been that the performance effect of most DTrace enablings is not sufficient to change the performance characteristics of the underlying problem. For whatever it's worth, there is a chapter in the AnswerBook that addresses how D programs should be written to minimize performance impact.
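To give a feel for the extremes, consider two sketched enablings (probe names from the syscall provider; "httpd" is just an illustrative process name). The first is a single predicate-qualified probe whose effect is unobservable; the second records a user stack on every system call on the box, which most certainly is observable:

```d
/* Effectively free: a single probe, qualified by a predicate. */
syscall::open:entry
/execname == "httpd"/
{
        @opens[copyinstr(arg0)] = count();
}

/* Decidedly not free: a user stack trace on every system call. */
syscall:::entry
{
        @stacks[ustack()] = count();
}
```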

In terms of DTrace security: DTrace can only be run by those who have the appropriate privileges, which by default is only the super-user. For more details, see the Security chapter of the DTrace AnswerBook. If your question is more about the safety of DIF, I would refer you to Section 3.5 of the DTrace USENIX paper.

Posted by Bryan Cantrill on August 21, 2004 at 11:32 AM PDT #

Alas, I have no direct AS/400 docs. I'll ask next time I run into folks. I know the architecture makes dynamic instrumentation a part of everyday life; I think a bit of that experience has moved into kprobes and the AIX hooking mechanisms.

However, many existing tools do use dynamic instrumentation. The Paradyn toolkit has existed in the research world for, what, 9+ years? (first citation I know is 95) And Alpha-based systems pioneered commercially available dynamic instrumentation and rewriting in user-space (ATOM, etc.). There is an article in OSDI (99, I think) about dynamically instrumenting kernel code. You cited almost none of the existing work on individual pieces, which is why some people dismiss DTrace as hype. I realize DTrace is a whole package, and that's a contribution in itself, but it's not completely revolutionary.

Over the last 3 years, I've seen presentations from pretty much every major vendor about user-space dynamic instrumentation. DTrace's contributions are commercially available and supported integration of kernel- and user-space tools, and its nice language that ties the pieces together. The language may not be perfect, but it's a damned good interface to the toolkit. These are nice, evolutionary steps. The press releases make them sound like huge, immense leaps. That really doesn't go over so well.

And note that much of the research has targeted architectures that are friendly to dynamic rewriting, like SPARC and IA32. Both have OSes with well-defined runtime semantics, and both have instructions which lend themselves to rewriting. Compare that to IA64... Even Intel's research tool (PIN) has pretty much dropped IA64 support. heh.

Also, I really like how you describe hooking into relevant counters as a debugging technique. It's a great trick and really justifies putting "performance" counting into everyday code. You may not use all the statistics, but the counting gives you places for run-time hooks.

Posted by oy vey on August 23, 2004 at 01:53 AM PDT #

Okay, this comment is either willfully ignorant or deliberately deceptive:
However, many existing tools do use dynamic instrumentation. The Paradyn toolkit has existed in the research world for, what, 9+ years? (first citation I know is 95) And Alpha-based systems pioneered commercially available dynamic instrumentation and rewriting in user-space (ATOM, etc.). There is an article in OSDI (99, I think) about dynamically instrumenting kernel code. You cited almost none of the existing work on individual pieces, which is why some people dismiss DTrace as hype.
You are completely wrong here. In our USENIX paper, we cite everything that you're referring to, including the KernInst paper (that's the OSDI '99 paper). I take great pride in the fact that our Related Work section is so well researched; not only have we cited well-known work like Paradyn, we have also cited its long-since-forgotten derivatives like MDL. Not only have we cited instrumentation work like ATOM, we've also cited work that is less clearly related like VINO, SPIN and AspectJ. Not only have we cited ISA-specific systems like Purify, we've also cited language-specific systems like UFO. And in our SysAdmin interview, we explicitly cited Miller's Paradyn as a source of inspiration.

Now, all of this said, there are substantial differences between DTrace and the work that has come before. For example, none of these instrumentation frameworks has thread-local variables, associative arrays, speculative tracing or aggregations -- all features that are critical for DTrace. You think these are "syntactic sugar"? Think again -- or better yet, try to implement them yourself for an arbitrary-context in-kernel instrumentation framework. And of course the most relevant difference between DTrace and these other systems: DTrace is designed to be used in production systems. Most of the related work that we cited is rife with edge conditions where misuse induces fatal application or system failure. And again, this isn't just "good implementation" -- it's a consequence of a deliberate architectural constraint.
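For the skeptical, here is roughly what two of those primitives -- thread-local variables and aggregations -- look like together in a D enabling (a sketch; quantize() yields a power-of-two histogram):

```d
/* Note the time, per-thread, when a write(2) starts... */
syscall::write:entry
{
        self->ts = timestamp;
}

/* ...and on return, aggregate the latency into a histogram. */
syscall::write:return
/self->ts/
{
        @latency["write latency (ns)"] = quantize(timestamp - self->ts);
        self->ts = 0;
}
```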

Posted by Bryan Cantrill on August 23, 2004 at 02:57 AM PDT #

I apologize for missing those references in your USENIX paper. I honestly don't know how I skipped them, and I apologize for the misleading comment. I think my ATOM reference was skewed by the later semi-dynamic work that became DCPI, and I had forgotten Miller's name, so I missed the string "Paradyn". You did fail to mention using tools like valgrind, qemu, or bochs for dynamic instrumentation.

However, having used Paradyn and Perl together, I know I've used associative arrays, and I know I used them to aggregate call traces with performance acceptable to that application. I do that quite regularly in user-space without Paradyn, tracing problems with MPI code: use the profiling hooks to get backtraces, stuff them into trees / assoc arrays, then aggregate the info back down to summarize what's going on in 500+ processes. The aggregation consists of summarizing MPI target buffers and offsets by name on each proc, etc. Here your contributions are to give it a wonderful interface and integrate it into a production kernel. A very little bit more work, and you'll have a rockin' cluster debugger.

"Speculative tracing" is very similar to writing a log into a circular buffer in embedded systems. When something vaguely interesting happens, start logging. If something bad happens, save the log. Again, one of your major contributions is to give this a great interface and to tie it to other features. And adding extensions with thread-local variables certainly isn't a new idea. How many security research projects add funky key collections, or networking projects add thread-level memory mappings, etc.? I'm not saying it isn't hard, just that it isn't a new idea.

Many things DTrace implements others have implemented as one-offs. Your main contribution is raising it to a generally useful level as I've said again and again. Some of the folks who seem to be talking down about DTrace have implemented and used these one-offs and get a bit insulted when you <em>appear</em> to state that these things have never been done before.

And I'm not discounting your work. Implementing even the one-off systems is painful; making it production quality is very impressive. Your engineering work is remarkable. I'm evaluating the scientific and research aspects, the "new and noteworthy" ideas. I primarily work in a different area, one where you have to justify any systems-level work with new ideas no matter how interesting the implementation is.

My view is still that this is a wonderful, evolutionary step for these tools and the ideas behind them. The underlying technology <em>pieces</em> have existed for quite a while, but you brought it up a level by combining the pieces and making it an everyday utility. That's important, but you appear to think it's not as important as reinventing the individual pieces.

If you take credit more for combining the pieces (and the engineering work), I don't think you'd have as many detractors or be confusing as many people with what's "new". All the arguments I've seen (which generally don't involve you, just citing your work) have been "this aspect is new", "no it isn't", "yes it is". Blah. Combining all the pieces and making them accessible is undeniably new. I don't understand why you keep pointing out <em>pieces</em> when the whole is the interesting part. And yes, if you look at all of your statements, you do mention the whole. But the initial press release sure made it sound like all-new, Sun-only pixie dust...

Posted by oy vey on August 23, 2004 at 04:43 AM PDT #

It's amazing to me that you miss an entire section when you read the paper, attack us for not having said section, and then still have the gall to tell us that "many things DTrace implements others have implemented as one-offs." This is false, and if you would read our paper (note: not "skim" or "print" or "download", but actually read) you would understand why (for example) your experiences postprocessing Paradyn with Perl are not the same thing as having associative arrays and aggregations as fundamental primitives in an instrumentation framework. Why not? Because when they're built in, unwanted data is eliminated at the source, in the kernel, reducing the data stream tremendously -- potentially by a factor of the number of data points. This is incredibly important, and designing these features such that they scale linearly with CPUs and can be called from arbitrary context is a problem that no one has solved in an instrumentation framework before DTrace.

You write:

I primarily work in a different area
Yeah, no kidding. Given that this isn't your area of expertise, you should probably defer to those that do work in this area -- like those who reviewed and accepted our USENIX paper. (USENIX is a peer reviewed conference that accepted 12.8% of its submissions this year.) Please carefully read that paper, and should you disagree with the claims therein, consider writing a scholarly, non-anonymous response -- as an open letter to the USENIX Association, as a technical report or even as your own blog entry.

Posted by Bryan Cantrill on August 23, 2004 at 05:31 AM PDT #

Who said I was post-processing trace files with Perl? Perl can be embedded in an app / kernel / whatever. Nowadays, I'd probably use lua; Perl's gotten a bit bloated. And exactly which section did I miss? Yes, I missed a citation (Paradyn's name appeared in the bib, not in the text), and I confused ATOM with DCPI. Sorry. I had assumed the "dynamic instrumentation" citations were for more recent tools which you didn't cite. And my parallel example has to aggregate at the nodes and during execution. There's no sane way to output a trace file from a 500 cpu job running for hours. Note that no one has tried to compete with Vampir; no one <em>wants</em> to deal with huge trace files.

Also, I'm not attacking. I'm trying to explain why you're getting some "this is hype" reactions. You jump up and down declaring that everything you've implemented is brand new. That's almost never true, no matter who you are. There's nothing wrong with that. You've clearly created a good tool and solved integration issues. Aggregating data at the source and then again through the network is a known technique in parallel and distributed systems (see the sensor network literature). Your integration of the pieces is new, which I've said in every post.

Now this is an attack: You need to read what people are trying to say, not what you're trying to hear. Get a thicker skin; you'll need it for OSDI-style pubs. USENIX has a different, more applied focus.

Posted by oy vey on August 23, 2004 at 06:47 AM PDT #

Perl can be embedded in a kernel? To be executed at arbitrary context? Tell you what: cite an example, and I'll explain to you how the example you provide doesn't allow for Perl to be executed in an arbitrary context. Perl has not been implemented in a lock-free fashion (required for arbitrary context execution), and (by its design) requires access to system services that are unavailable in arbitrary kernel context. (And Perl is certainly not designed to be an instrumentation language with a safety constraint.)

I love this self-contradicting paragraph of yours:

Also, I'm not attacking. I'm trying to explain why you're getting some "this is hype" reactions. You jump up and down declaring that everything you've implemented is brand new. That's almost never true, no matter who you are.
I'm actually not jumping "up and down declaring that everything [we've] done is brand new." Rather, I'm only insisting that we have done some things that are brand new -- and I have cited these contributions (ad nauseam) both here and in the paper.

And finally, your closer:

Now this is an attack: You need to read what people are trying to say, not what you're trying to hear. Get a thicker skin; you'll need it for OSDI-style pubs. USENIX has a different, more applied focus.
I find this richly ironic -- given that your skin is apparently so paper-thin that you can't even attach your name to your criticisms...

Posted by Bryan Cantrill on August 23, 2004 at 07:52 AM PDT #

Bryan, maybe a "cheat sheet"-type page is needed for some of these common "beginner"-type questions. E.g. something vaguely like:

Intro: Dtrace allows you to deeply understand how Solaris and application software interact, and what they are doing. With the D scripting language, it allows you to find out what is going on and why. This can be done at any time, anywhere, on any Solaris 10 system. The end result will be faster, more reliable and more predictable systems, accomplished in far less time than with alternatives.

Dtrace is of use to the following people:

  • Software developers to debug software much faster, and optimise it further.
  • Software integrators to more quickly and reliably solve complex problems involved with the interaction of separate applications
  • System administrators to more quickly and reliably determine configuration and operational issues
  • Data-center administrators to achieve higher application availability and performance, with fewer resources and less stressed staff.

An expert understanding of operating systems is not a requirement to use Dtrace. Additional resources include:

  • The complete guide to using Dtrace, including many examples, can be found here
  • Some additional D scripts can be found here
  • A technical description of how Dtrace works and how it compares to alternatives can be found here. This paper was presented at Usenix and has been peer-reviewed.
  • The Dtrace home page, with more resources and forums can be found here

What Dtrace is not:

  • Dtrace does not supersede all other observability and debugging tools. Higher-level tools are best used to find if something might be wrong, and then Dtrace can be used to find the root cause.
  • Dtrace is not fully available to all users. For security reasons, users need special privileges to use all the features. See the security chapter in the guide.
Dtrace's main features and benefits, and how competing tools compare: <table border=2> <tr valign=top><th rowspan=2>Feature</th><th colspan=3>Supported by</th><th rowspan=2>Benefit</th></tr> <tr><th>Dtrace</th><th>Dprobes</th><th>LTT</th></tr> <tr><td>Dynamic tracing</td><td>Yes</td><td>Yes</td><td>No</td><td>With static tracing, what can be probed is limited in advance. Dynamic tracing does not require special software support</td></tr> <tr><td>100% reliable</td><td>Yes</td><td>No</td><td>?</td><td>If the tracing system can cause the program or OS to crash then it can't be used in production systems. This saves development time as well.</td></tr> <tr><td>Trace user and kernel space at same time</td><td></td><td></td><td></td><td>...</td></tr> <tr><td>Trace arbitrary contexts</td><td></td><td></td><td></td><td>...</td></tr> <tr><td>Etc...</td><td></td><td></td><td></td><td>...</td></tr> </table>

Mini FAQ: <dl> <dt>Can Dtrace be used on Solaris 9 or earlier?</dt> <dd>No, Dtrace is only available starting with Solaris 10.</dd> <dt>Will Dtrace be backported to Solaris 9?</dt> <dd>No, the development effort would be too much.</dd> <dt>Can Dtrace debug a cluster?</dt> <dd>Dtrace can debug individual nodes, not clusters as a whole.</dd> <dt>Is Dtrace available on Solaris x86?</dt> <dd>Yes</dd> <dt>Sun is open-sourcing Solaris - will Dtrace be ported to Linux? Will the license allow it?</dt> <dd>Sun will not port Dtrace themselves to other operating systems. What the license will allow and prevent is currently unknown since it hasn't been finalised.</dd> </dl>

Posted by Chris Rijk on August 23, 2004 at 09:24 PM PDT #

Hey Chris,

Thanks for doing that -- that's a great way of organizing the content. We tried to make the BigAdmin page be that "cheat sheet", but I think you've found a better way to present some of this information. We'll work on getting the BigAdmin page looking more like this; thanks again!

Posted by Bryan Cantrill on August 24, 2004 at 04:29 AM PDT #

Glad you liked it.

btw, I wrote it more as an "intro" page than as an alternative to the current home page. Though this is actually a bit of a classic problem: should your home page be optimised towards new visitors, or regulars, or do you try to fit in both? For example, I feel that www.sun.com as a whole is not as friendly as it could be for new visitors, who may know little about Sun. www.sun.com is much better than average though.

Sometimes I think the best thing (in general) would be to have a home page optimised to new visitors, with a "main page" or whatever you want to call it, that regulars can bookmark, and has the most relevant info for them. Sun's main site does provide "My Sun", though that's a bit too much work for sub-parts of the site.

Hmm, if all sub-parts of the site (like Dtrace) provided "feeds" that My Sun could use, that could make things interesting. (I notice on some pages on MySun it says iPlanet Portal Server 3.0 - perhaps it's a bit neglected...?)

Posted by Chris Rijk on August 24, 2004 at 08:39 PM PDT #

Hi,

First, thanks for a really interesting tool -- from the systems administration perspective in my case (at least for my current job ;-) Pity we're still on Solaris 8 and that management flirts with migrating to Wintel...

Just a comment concerning the comparison of DTrace with tools on other platforms: DTrace seems to be a great tool for finding the root cause of problems on a production server, which is my main worry at the moment. However, if there are good tools available on other platforms, such as Linux discussed here, I'm more than happy to get hold of them and learn to use them. I've also got servers running Linux...

Instead of bashing each other, maybe someone could give some constructive comparison between the tools mentioned in the linux.kernel thread, notably kprobes, valgrind and oprofile, and DTrace; in particular since those three tools were not mentioned in the USENIX paper on DTrace? Any reason why they were not considered as related work?

My main question is how to combine the use of LTT, DProbes, KProbes, OProfile, and whatever else is needed, to get similar functionality to DTrace. It'll probably turn out that it is not that easy?

Best regards,
Frank Olsen

Posted by Frank Olsen on August 29, 2004 at 10:51 PM PDT #

Hi,

My excuses for the formatting of the above post. Seems that I should have used the preview after all.

Best regards,

Frank Olsen

Posted by Frank Olsen on August 30, 2004 at 01:54 AM PDT #
