Saturday May 31, 2008

A Tribute to Jim Gray

Like several hundred others, I spent today at Berkeley at a tribute to honor Jim Gray. While many there today had collaborated with Jim in some capacity, my own connection to him is tertiary at best: I currently serve on the Editorial Advisory Board of ACM Queue, a capacity in which Jim also served up until his disappearance. And while I never met Jim, his presence is still very much felt on the Queue Board -- and I went today as much to learn about as to honor the man and computer scientist who still looms so large.

I came away inspired not only by what Jim had done and how he had done it, but also by the many in whom Jim had seen and developed a kind of greatness. It's rare for a gifted mind to be emotionally engaged with others (they are often "anti-human", in the words of one presenter), but practically unique for an intellect of Jim's caliber to engage so intensely and collaboratively with so many. Indeed, I found myself wondering if Jim didn't belong in the pantheon of Erdos and Feynman -- two other larger-than-life figures who shared Jim's zeal for problems and affection for those solving them.

On a slightly more concrete level, I was impressed that Jim was not only a big, adventurous thinker, but one who remained moored to reality. This delicate balance pervades his life's work: The System R folks talked about how in a year he not only wrote 9 papers, but cut 10,000 lines of code. The Wisconsin database folks talked about how the man who had pioneered so much in transaction processing also pioneered database benchmarking, developing in a classic Datamation paper the precursor to what would become the Transaction Processing Performance Council. The Tandem folks talked about how he reached beyond research and development to engage with the field and with customers to figure out (and publish!) why systems actually failed -- which was quite a reach for a company that famously dubbed every product "NonStop". And the Microsoft folks talked about his driving vision to put a large database on the web that people would actually use, leading him to develop TerraServer, and to inspire and assist subsequent systems like the Sloan Digital Sky Survey and Microsoft Research's WorldWide Telescope (the demo of which was incredible, by the way) -- systems that solved hard, abstract problems but that also delivered appreciable concrete results.

All along, Jim remained committed to big, future-looking ideas, while still insisting on designing and implementing actual systems in the present. We in computer science do not strike that balance often enough -- we are too often either detached from reality, or drowning in it -- so Jim's ability to not only strike that balance but to inspire it in others was a tremendous asset to our discipline. Jim, you are sorely missed -- even (and perhaps especially) by those of us who never met you...

Sunday Mar 16, 2008

dtrace.conf(08)

dtrace.conf(08) was this past Friday, and (no surprise, given the attendees) it ended up being an incredible (un)conference. DTrace is able to cut across vertical boundaries in the software stack (taking you from, say, PHP through to the kernel), and it is not surprising that this was also reflected in our conference, which ran from the darkness of new depths (namely, Andrew Gardner from the University of Arizona on using DTrace on dynamically reconfigurable FPGAs) through the more familiar deep of virtual machines (VMware, Xen and Zones) and operating system kernels (Solaris, OS X, BSD, QNX), into the rolling hills of databases (PostgreSQL) and craggy peaks of the Java virtual machine, and finally to the hypoxic heights of environments like Erlang and AIR. And as it is the nature of DTrace to reflect not the flowery theory of a system but rather its brutish reality, it was not at all surprising (but refreshing nonetheless) that nearly every presentation had a demo to go with it. Of these, it was particularly interesting that several were on actual production machines -- and that several others were on the presenter's Mac laptop. (I know I have been over the business case of the Leopard port of DTrace before, but these demos served to once again remind us of the new vistas that have been opened up by having DTrace on such a popular development platform -- and of how the open sourcing of DTrace has indisputably been in Sun's best business interest.)

When we opened in the morning, I claimed that the room had as much software talent as has ever been assembled in 70 people; after seeing the day's presentations and the many discussions that they spawned, I would double down on that claim in an instant. And while the raw pulling power of the room was awe-inspiring, the highlight for me was looking around as everyone was eating dinner: despite many companies having sent more than one participant, no dinner conversation seemed to have two people from the same company -- or even from the same layer of the stack. It almost brought a tear to the eye to see such interaction among such superlative software talent from such disjoint domains, and my dinner partner captured the zeitgeist precisely: "it's like a commune in here."

Anyway, I had a hell of a time, and judging by their blog entries, Keith, Theo, Jarod and Stephen did too. So thanks everyone for coming, and a huge thank you to our sponsor Forsythe -- and here's looking forward to dtrace.conf(09)!

Tuesday Dec 18, 2007

Announcing dtrace.conf

We on Team DTrace have realized that with so much going on in the world of DTrace -- ports to new systems, providers for new languages and development of new DTrace-based tools -- the time is long overdue for a DTrace summit of sorts. So I'm very pleased to announce our first-ever DTrace (un)conference: dtrace.conf(08), to be held in San Francisco on March 14, 2008. One of the most gratifying aspects of DTrace is its community: because DTrace operates both across abstraction boundaries and to great depth, it naturally attracts technologists who do the same -- systems generalists who couple an ability to think abstractly about the system with a thirst for the concrete data necessary to better understand it. So if nothing else, dtrace.conf should provide for thought-provoking and wide-ranging conversation!


The conference wiki has more details; hope to see you at dtrace.conf in San Francisco in March!

Tuesday Dec 04, 2007

Boom/bust cycles

Growing up, I was sensitive to the boom/bust cycles endemic in particular industries: my grandfather was a petroleum engineer, and he saw the largest project of his career cancelled during the oil bust in the mid-1980s.[1] And growing up in Colorado, I saw not only the destruction wrought by the oil bust (1987 was very bleak in Denver), but also the long boom/bust history in the mining industries -- unavoidable in towns like Leadville. (Indeed, it was only after going to the East Coast for university that I came to appreciate that school children in the rest of the country don't learn about the Silver Panic of 1893 in such graphic detail.)


So when, in the early 1990s, I realized that my life's calling was in software, I felt a sense of relief: here was an industry that was at last not shackled to the fickle Earth -- an industry that was surely "bust-proof" at some level. Ha! As I learned (painfully, like most everyone else) in the Dot-Com boom and subsequent bust, booms and busts are -- if anything -- even more endemic in our industry. Companies grow from nothing to spectacular heights in virtually no time at all -- and can crash back into nothingness just as quickly.


I bring all of this up because, if you haven't seen it, this video is absolutely brilliant: it captures these endemic cycles perfectly (and hilariously), and imparts a surprising amount of wisdom besides.


I'll still take our boom and bust cycles over those in other industries: there are, after all, not quite yet any software "ghost towns" -- however close the old Excite@Home building on the 101 was getting before it was finally leased!



[1]
If anyone is looking for an unspeakably large quantity of technical documentation on the (cancelled) ARAMCO al-Qasim refinery project, I have it waiting for your perusal!

Sunday Nov 11, 2007

On Dreaming in Code

As I noted previously, I recently gave a Tech Talk at Google on DTrace. When I gave the talk, I was in the middle of reading Scott Rosenberg's Dreaming in Code, and (for whatever poorly thought-out reason) I elected to use my dissatisfaction with the book as an entree into DTrace. I think that my thinking here was to use what I view to be Rosenberg's limited understanding of software as a segue into the more general difficulty of understanding running software -- with that of course being the problem that DTrace was designed to solve.

However tortured that path of reasoning may sound now, it was much worse in the actual presentation -- and in terms of Dreaming in Code, my introduction comes across as a kind of gangland slaying of Rosenberg's work. When I saw the video, I just cringed, noted with relief that at least the butchery was finished by five minutes in, and hoped that viewers would remember the meat of the presentation rather than the sloppy hors d'oeuvres.

I was stupidly naive, of course -- for as soon as so much as one blogging observer noted the connection between my presentation and the book, Google-alerted egosurfing would no doubt take over and I'd be completely busted. And that is more or less exactly what happened, with Rosenberg himself calling me out on my introduction, complaining (rightly) that I had slagged his book without providing much substance. Rosenberg was particularly perplexed because he felt that he and I are concluding essentially the same thing. But this is not quite the case: the conclusion (if not mantra) of his book is that "software is hard", while the point I was making in the Google talk is not so much that developing software is hard, but rather that software itself is wholly different -- that it is (uniquely) a confluence between information and machine. (And, in particular, that software poses unique observability challenges.)

This sounds like a really pedantic difference -- especially given that Rosenberg (in both book and blog entry) does make some attempt to show the uniqueness of software -- but the difference is significant: instead of beginning with software's information/machine duality, and from there exploring the difficulty of developing it, Rosenberg's work has at its core the difficulty itself. That is, Rosenberg uses the difficulty of developing software as the lens to understand all of software. And amazingly enough, if you insist on looking at the world through a bucket of shit, the world starts to look an awful lot like a bucket of shit: by the end of the book, Rosenberg has left the lay reader with the sense that we are careering towards some sort of software heat-death, after which meetings-about-meetings and stickies-on-whiteboards will prevent all future progress.

It's not clear if this is the course that Rosenberg intended to set at the outset; in his Author's Note, he gives us his initial bearings:

Why is good software so hard to make? Since no one seems to have a definitive answer even now, at the start of the twenty-first century, fifty years deep into the computer era, I offer by way of exploration the tale of the making of one piece of software.

The problem is that the "tale" that he tells is that of the OSAF's Chandler, a project plagued by metastasized confusion. In fact, the project is such a wreck that it gives rise to a natural question: did Rosenberg pick a doomed project because he was convinced at the outset that developing software was impossible, and he wanted to be sure to write about a project that wouldn't hang the jury? Or did his views of the impossibility of developing software come about as a result of his being trapped on such a reeking vessel? On the one hand, it seems unimaginable that Rosenberg would deliberately book his passage on the Mobro 4000, but on the other, it's hard to imagine how a careful search would have yielded a better candidate to show just how bad software development can get: PC-era zillionaire with too many ideas funds on-the-beach programmers to change the world with software that has no revenue model. Yikes -- call me when you ship something...

By the middle of the book it is clear that the Chandler garbage barge is adrift and listing, and Rosenberg, who badly wants the fate of Chandler to be a natural consequence of software and not merely a tale of bad ideas and poor execution, begins to mark time by trying to find the most general possible reasons for this failure. And it is this quest that led to my claim in the Google video that he was "hoodwinked by every long-known crank" in software, a claim that Rosenberg objects to, noting that in the video I provided just one example (Alan Kay). But this claim I stand by as made -- and I further claim that the most important contribution of Dreaming in Code may be its unparalleled Tour de Crank, including not just the Crank Holy Trinity of Minsky/Kurzweil/Joy, but many of the lesser known crazy relations that we in computer science have carefully kept locked in the cellar. Now, to be fair, Rosenberg often dismisses them (and Kapor put himself squarely in the anti-Kurzweil camp with his 2029 bet), but he dismisses them not nearly often enough or swiftly enough -- and by just presenting them, he grants many of them more than their due in terms of legitimacy.

So how would a change in Rosenberg's perspective have yielded a different work? For one, Rosenberg misses what is perhaps the most profound ramification of the information/machine duality: that software -- unlike everything else that we build -- can achieve a timeless and absolute perfection. To be fair, Rosenberg comes within sight of this truth -- but only for a moment: on page 336 he opens a section with "When software is, one way or another, done, it can have astonishing longevity." But just as quickly as hopes are raised, they are dashed to the ground again; he follows up that promising sentence with "Though it may take forever to build a good program, a good program will sometimes last almost as long." Damn -- so close, yet so far away. If "almost as long" were replaced with "in perpetuity", he might have been struck by the larger fallacies in his own reasoning around the intractable fallibility of software.

And software's ability to achieve perfection is indeed striking, for it makes software more like math than like traditional engineering domains -- after all, long after the Lighthouse at Alexandria crumbled, Euclid's greatest common divisor algorithm is showing no signs of wearing out. Why is this important? Because once software achieves something approximating perfection (and a surprising amount of it does), it sediments into the information infrastructure: the abstractions defined by the software become the bedrock that future generations may build upon. In this way, each generation of software engineer operates at a higher level of abstraction than the one that came before it, exerting less effort to do more. These sedimenting abstractions also (perhaps paradoxically) allow new dimensions of innovation deeper in the stack: with the abstractions defined and the constraints established, one can innovate underneath them -- provided one can do so in a way that is absolutely reliable (after all, this is to be bedrock), and with a sufficient improvement in terms of economics to merit the risk. Innovating in such a way poses hard problems that demand a much more disciplined and creative engineer than did the software sludge that typified the PC revolution, so if Rosenberg had wanted to find software engineers who know how to deliver rock-solid, highly innovative software, he should have gone to those software engineers who provide the software bedrock: the databases, the operating systems, the virtual machines -- and increasingly the software that is made available as a service. There he still would have found problems, to be sure (software still is, after all, hard), but he would have come away with a much more nuanced (and more accurate) view both of the state of the art of software development -- and of the future of software itself.
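
To make that particular piece of bedrock concrete, here is one rendering of it -- a few purely illustrative lines of C of my own, not anyone's canonical implementation -- computing what Euclid described some twenty-three centuries ago:

    /*
     * Euclid's algorithm: the greatest common divisor of a and b.
     * Structurally the same computation described in Elements, Book VII --
     * and showing no signs of wearing out.
     */
    unsigned int
    gcd(unsigned int a, unsigned int b)
    {
        while (b != 0) {
            unsigned int t = b;

            b = a % b;
            a = t;
        }

        return (a);
    }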

Thursday Nov 08, 2007

DTrace on QNX!

There are moments in anyone's life that seem mundane at the time, but become pivotal in retrospect: that chance meeting of your spouse, or the job you applied for on a lark, or the inspirational course that you just stumbled upon.

For me, one of those moments was in late December, 1993. I was a sophomore in college, waiting in Chicago O'Hare for a connecting flight home to Denver for winter break, reading an issue of the late Byte magazine dedicated to operating systems. At the time, I had just completed what would prove to be the most intense course of my college career -- Brown's (in)famous CS169 -- and the magazine contained an article on a very interesting microkernel-based operating system called QNX. The article, written by the late, great Dan Hildebrand, was exciting: we had learned about microkernels in CS169 (indeed, we had implemented one), and here was one running in the wild, solving real, hard problems. I was inspired. When I returned to school in January, I cold e-mailed Dan and asked about the possibilities of summer employment. Much to my surprise, he responded! Dan and QNX co-founder Dan Dodge gave me an intense phone interview -- and then offered me work for the summer. I worked at QNX (based in Kanata, outside of Ottawa) that summer, and then came back the next. While I didn't end up working for QNX after graduation, I have always thought highly of the company, the people -- and especially the technology.

So you can imagine that for me, it's a unique pleasure -- and an odd sort of homecoming -- to announce that DTrace is being ported to QNX. In my DTrace talk at Google I made passing reference (at about 1:11:53) to one "other system" that DTrace was being ported to -- but I was not at liberty to say which. Some speculated that this "surely" must be Windows -- so hopefully the fact that it was QNX will serve as a reminder that it's a big, heterogeneous world out there. So, to Colin Burgess and Thomas Fletcher at QNX: congratulations on your fine work. And to QNX'ers: welcome to the DTrace community!

Now, with the drinks poured and the toasts made, I must confess that this is also a moment colored by some personal sadness, for I would love nothing more right now than to call up danh, chat about the exciting prospects for DTrace on QNX, and reminisce about the many conversations over our shared cube wall so long ago. (And if nothing else, I would kid him about his summer-of-1994 enthusiasm for Merced!) Dan, you are sorely missed...

Monday Oct 29, 2007

DTrace, Leopard, and the business of open source

If you haven't seen it, DTrace is now shipping in Mac OS X Leopard. This is very exciting for us here at Sun, but one could be forgiven for asking an obvious question: why? How is having our technology in Leopard (which, if Ars Technica is to be believed, is "perhaps the most significant change in the Leopard kernel") helping Sun? More bluntly, haven't we in Sun handed Apple a piece of technology that we could have sold them instead? The answer to these questions -- which are very similar in spirit to the questions that were asked over and over again internally as we prepared to open source Solaris -- is that they are simply the wrong questions.

The thrust of the business questions around open source should not be "how does this directly help me?" or "can't I sell them something instead?" but rather "how much does it cost me?" and "does it hurt me?" Why must one shift the bias of the questions? Because open source often helps in much more profound (and unanticipated) ways than just this quarter's numbers; one must look at open source as long term strategy rather than short term tactics. And as for the intense (and natural) desire to sell a technology instead of giving away the source code, one has to understand that the choice is not between "I give a customer my technology" and "I sell a customer my technology", but rather between "a customer that I never would have had uses my technology" and "a customer that I never would have had uses someone else's technology." When one thinks of open source in this way, the business case becomes much clearer -- but this still may be a bit abstract, so let's apply these questions to the specific case of DTrace in Leopard...

The first question is "how much did it cost Sun to get DTrace on Leopard?" The answer is that it cost Sun just about nothing. And not metaphorical nothing -- I'm talking actual, literal nothing: Adam, Mike and I had essentially one meeting with the Apple folks, answering some questions that we would have answered for anyone anyway. But answering questions doesn't ship product; how could the presence of our software in another product cost us nothing? This is possible because of that most beautiful property of software: it has no variable cost; the only meaningful costs associated with software are fixed costs, and those costs were all borne by Apple. Indeed, it has cost Sun more money (in terms of my time spent blogging about how this cost Sun nothing) than it did in fact cost Sun in the first place...

With that question answered, the second question is "does the presence of DTrace on Leopard hurt Sun?" The answer is that it's very hard to come up with a situation whereby this hurts Sun: one would have to invent a fictitious customer who is happily buying Sun servers and support -- but only because they can't get DTrace on their beloved Mac OS X. In fact, this mythical customer apparently hates Sun (but paradoxically loves DTrace?) so much that they're willing to throw out all of their Sun and Solaris investment over a single technology -- and one that is present in both systems no less. Even leaving aside that Solaris and Mac OS X are not direct competitors, this just doesn't add up -- or at least, it adds up to such a strange, irrational customer that you'll find them in the ones and twos, not the thousands or millions.

But haven't we lost some competitive advantage to Apple? Doesn't that hurt Sun? The answer, again, is no. If you love DTrace (and again, that must be presupposed in the question -- if DTrace means nothing to you, then its presence in Mac OS X also means nothing to you), then you are that much more likely to look at (and embrace) other, perhaps less ballyhooed, Solaris technologies like SMF, FMA, Zones, least-privilege, etc. That is, the kind of technologist who appreciates DTrace is also likely to appreciate the competitive advantages of Solaris that run far, far beyond merely DTrace -- and that appreciation is not likely to be changed by the presence of DTrace in another system.

Okay, so this doesn't cost Sun anything, and it doesn't hurt Sun. Once one accepts that, one is open to a much more interesting and meaningful question: namely, does this help Sun? Does it help Sun to have our technology -- especially a revolutionary one -- present in other systems? The answer is "you bet!" There are of course some general, abstract ways that it helps -- it grows our DTrace community, it creates larger markets for our partners and ISVs that wish to offer DTrace-based solutions and services, etc. But there are also more specific, concrete ways: for example, how could it not help Solaris to have Ruby developers (the vast majority of whom develop on Mac OS X) become accustomed to using DTrace to debug their Rails apps? Today, Rails apps are generally developed on Mac OS X and deployed on Linux -- but one can make a very, very plausible argument that getting Rails developers hooked on DTrace on the development side could well change the dynamics on the deployment side. (After all, DTrace + Leopard + Ruby-on-Rails is crazy delicious!) This all serves as an object lesson in how unanticipatable the benefits of open source can be: despite extensive war-gaming, no one at Sun anticipated that open sourcing DTrace would allow it to be used to Sun's advantage on a hot web development platform running on a hip development system, neither of which originated at Sun.

And the DTrace/Leopard/Ruby triumvirate points to a more profound change: the presence of DTrace in other systems assures that it transcends a company or its products -- that it moves beyond a mere feature and becomes a technological advance. As such, you can be sure that systems that lack DTrace will become increasingly unacceptable over time. DTrace's shift from product to technological advance -- just like the shifts in NFS or Java before it -- is decidedly and indisputably in Sun's interest, and indeed it embodies the value proposition of the open systems vision that started our shop in the first place. So here's to DTrace on Leopard, long may it reign!

Thursday Sep 06, 2007

DTrace on ONTAP?

As presumably many have seen, NetApp is suing Sun over ZFS. I was particularly disturbed to read Dave Hitz's account, whereby we supposedly contacted NetApp 18 months ago requesting that they pay us "lots of money" (his words) for infringing our patents. Now, as a Sun patent holder, reading this was quite a shock: I'm not enamored with the US patent system, but I have been willing to contribute to Sun's patent portfolio because we have always used them defensively. Had we somehow lost our collective way?


Now, I should say that there was something that smelled fishy about Dave's account from the start: I have always said that a major advantage of working for or doing business with Sun is that we're too disorganized to be evil. Being disorganized causes lots of problems, but actively doing evil isn't among them; approaching NetApp to extract gobs of money shows, if nothing else, a level of organization that we as a company are rarely able to summon. So if Dave's account were true, we had silently effected two changes at once: we had somehow become organized and evil!


Given this, I wasn't too surprised to learn that Dave's account isn't true. As Jonathan explains, NetApp first approached STK through a third party intermediary (boy, are they ever organized!) seeking to buy the STK patents. When we bought STK, we also bought the ongoing negotiations, which obviously, um, broke down.


Beyond clarifying how this went down, I think two other facts merit note: one is that NetApp has filed suit in East Texas, the infamous den of patent trolls. If you'll excuse my frankness for a moment, WTF? I mean, I'm not a lawyer, but we're both headquartered in California -- let's duke this out in California like grown-ups.


The second fact is that this is not the first time that NetApp has engaged in this kind of behavior: in 2003, shortly after buying the patents from bankrupt Auspex, NetApp went after BlueArc for infringing three of the patents that they had just bought. Importantly, NetApp lost on all counts -- even after an appeal. To me, the misrepresentation of how the suit came to be, coupled with the choice of venue and this history, shows that NetApp -- despite their claptrap to the contrary -- would prefer to meet their competition in a court of law rather than in the marketplace.


In the spirit of offering constructive alternatives, how about porting DTrace to ONTAP? As I offered to IBM, we'll help you do it -- and thanks to the CDDL, you could do it without fear of infringing anyone's patents. After all, everyone deserves DTrace -- even patent trolls. ;)

Tuesday Aug 21, 2007

DTrace at Google

Recently, I gave a Tech Talk at Google on DTrace, the video for which is now online. If you've seen me present before and you don't want to suffer through the same tired anecdotes, arcane jokes, and disturbing rants, jump ahead to about 57:16 to see a demo of DTrace for Python -- and in particular John Levon's incredible Python ustack helper. You also might want to skip ahead to 1:10:46 for the Q&A -- any guesses what the first question was? (Hint: it was such an obvious question, both I and the room erupted with laughter.) Note that my rather candid answer to that question represents my opinion alone, and does not constitute legal advice -- and nor does it represent any sort of official position of Sun...

Thursday Aug 02, 2007

Caught on camera

Software is such an abstract domain that it is rare for a photograph to say much. But during his recent trip to OSCON, Adam snapped a photo that actually says quite a bit. For those who are curious, "jdub" is the nom de guerre of Jeff Waugh, who should really know better...

Saturday Jul 28, 2007

On the beauty in Beautiful Code

I finished Beautiful Code this week, and have been reflecting on the book and its development. In particular, I have thought back to the discussion among the authors, in which some advocated a different title. Many of us were very strongly in favor of the working title of "Beautiful Code", and I weighed in with my specific views on the matter:

   Date: Tue, 19 Sep 2006 16:18:22 -0700
   From: Bryan Cantrill 
   To: [Beautiful Code Co-Authors]
   Subject: Re: [Beautifulcode] reminder: outlines due by end of September

   Probably pointless to pile on here, but I'm for "Beautiful Code", if
   only because it appropriately expresses the nature of our craft.  We
   suffer -- tremendously -- from a bias from traditional engineering that
   writing code is like digging a ditch:  that it is a mundane activity best
   left to day labor -- and certainly beneath the Gentleman Engineer.  This
   belief is profoundly wrong because software is not like a dam or a
   superhighway or a power plant:  in software, the blueprints _are_ the
   thing; the abstraction _is_ the machine.

   It is long past time that our discipline stood on its feet, stepped out
   from the shadow of sooty 19th century disciplines, and embraced our
   unique confluence between mathematics and engineering.  In short,
   "Beautiful Code" is long, long overdue.

Now, I don't disagree with my reasoning from last September (though I think that the Norma Rae-esque tone was probably taking it a bit too far), but having now read the book, I stand by the title for a very different reason: this book is so widely varied -- there are so many divergent ideas here -- that only the most subjective possible title could encompass them. That is, any term less subjective than "beautiful" would be willfully ignorant of the disagreements (if implicit) among the authors about what constitutes ideal software.

To give an idea of what I'm talking about, here is the breakdown of languages and their representations in chapters:

Language        Chapters
C               11
Java            5
Scheme/Lisp     3
C++             2
Fortran         2
Perl            2
Python          2
Ruby            2
C#              1
JavaScript      1
Haskell         1
VisualBASIC     1

(I'm only counting each chapter once, so for the very few chapters that included two languages, I took whatever appeared more frequently. Also, note that some chapters were about the implementation of one language feature in a different language -- so for example, while there are two additional chapters on Python, both pertain more to the C-based implementation of those features than to their actual design or use in Python.)

Now, one could argue (and it would be an interesting argument) about how much choice of language matters in software or, for that matter, in thought. And indeed, in some (if not many) of these chapters, the language of implementation is completely orthogonal to the idea being discussed. But I believe that language does ultimately affect thought, and it's hard to imagine how one could have a sense of beauty that is so uncritical as to equally accommodate all of these languages.

More specifically: read, say, R. Kent Dybvig's chapter on the implementation of syntax-case in Scheme and William Otte and Douglas Schmidt's chapter on implementing a distributed logging service using an object-oriented C++ framework. It seems unlikely to me that one person will come away saying that both are beautiful to them. (And I'm not talking new-agey "beautiful to someone" kind of beautiful -- I'm talking the "I want to write code like that" kind of beautiful.) This is not meant to be a value judgement on either of these chapters -- just the observation that their definitions of beauty are (in my opinion, anyway) so wildly divergent as to be nearly mutually exclusive. And that's why the title is perfect: both of these chapters are beautiful to their authors, and we can come away saying "Hey, if it's beautiful to you, then great."

So I continue to strongly recommend Beautiful Code, but perhaps not in the way that O'Reilly might intend: you should read this book not because it's cover-to-cover perfection, but rather to hone your own sense of beauty. To that end, this is a book best read concurrently with one's peers: discussing (and arguing about) what is beautiful, what isn't beautiful, and why will help you discover and refine your own voice in your code. And doing this will enable you to write the most important code of all: code that is, if nothing else, beautiful to you.

Tuesday Jul 10, 2007

Beautiful Code

So my copy of Beautiful Code showed up last week. Although I am one of the (many) authors and I have thus had access to the entire book online for some time, I do all of my pleasure reading in venues that need the printed page (e.g. the J Church) and have therefore waited for the printed copy to start reading.

Although I have only read the first twelve chapters or so, it's already clear (and perhaps not at all surprising) that there are starkly different definitions of beauty here: the book's greatest strength -- and, frankly, its greatest weakness -- is that the chapters are so incredibly varied. For one chapter, beauty is a small and titillating act of recursion; for the next, it's that a massive and complicated integrated system could be delivered quickly and cheaply. (I might add that the definition of beauty in my own chapter draws something from both of these poles: that in software, the smallest and most devilish details can affect the system at the largest and most basic levels.)

If one can deal with the fact that the chapters are widely divergent, and that there is not even a token attempt to weave them together into a larger tapestry, this book (at least so far, anyway) is (if nothing else) exceptionally thought-provoking; if Oprah were a code-cranking propeller-head, this would be the ideal choice for her book club.

Now, in terms of some of the specific thoughts that have been provoked: as I mentioned, quite a few of my coauthors are enamored with the elegance of recursion. While I confess that I like writing a neatly recursive routine, I also find that I frequently end up having to unroll the recursion when I discover that I must deal with data structures that are bigger than I anticipated -- and that my beautiful code results (or can result) in a stack overflow. (Indeed, I spent several unpleasant days last week doing exactly this when I discovered that pathologically bad input could cause blown stacks in some software that I'm working on.)

To take a concrete example, Brian Kernighan has a great chapter in Beautiful Code about some tight, crisp code written by Rob Pike to perform basic globbing. And the code is indeed beautiful. But it's also (at least in a way) busted: it overflows the stack on some categories of bad input. Admittedly, one is talking about very bad input here -- strings that consist of hundreds of thousands of stars in this case -- but this highlights exactly the problem I have with recursion: it leaves you with edge conditions that on the one hand really are edge conditions (deeply pathological input), but with a failure mode (a stack overflow) that's just too nasty to ignore.

Now, there are ways to deal with this. If one can stomach it, the simplest is to set up a sigaltstack and then siglongjmp out of a SIGSEGV/SIGBUS signal handler. You have to be very careful about doing this: the signal handler should look at the si_addr field in the siginfo and compare it to the stack bounds to confirm that it's a stack overflow, lest it end up siglongjmp'ing out of a non-recursion-induced SIGSEGV (which, needless to say, would make a bad problem much worse). While an alternate signal stack solution may sound hideous to some, at least the recursion doesn't have to go under the knife in this approach. If having a SIGSEGV handler to catch this condition feels uncomfortably brittle (as well it might), or if one's state cannot be neatly unwound after an arbitrary siglongjmp (as well it might not), the code will have to change: either a depth counter will have to be passed down and failure propagated when depth exceeds a reasonable maximum, or the recursion will have to be unrolled into iteration. For most aesthetic senses, none of these options is going to make the code more beautiful -- but they will make it indisputably more correct.
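
To make the alternate stack technique concrete, here is a minimal, hypothetical sketch -- emphatically not the program mentioned below, just the shape of the idea. The names (handler, recurse) and constants are purely illustrative, and the stack bounds are only crudely approximated (from the address of a local in main() and RLIMIT_STACK); a real implementation would want genuine bounds:

    /*
     * Sketch: catch a recursion-induced stack overflow on an alternate
     * signal stack and recover via siglongjmp.
     */
    #include <signal.h>
    #include <setjmp.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <stdint.h>
    #include <sys/resource.h>

    static sigjmp_buf env;
    static uintptr_t stacktop;      /* approximate top of the main stack */
    static size_t stacklim;         /* stack size, per RLIMIT_STACK */

    static void
    handler(int sig, siginfo_t *sip, void *ucp)
    {
        uintptr_t addr = (uintptr_t)sip->si_addr;

        /*
         * Only recover if the faulting address plausibly lies within the
         * main stack; otherwise this is a garden-variety SIGSEGV, and
         * siglongjmp'ing out of it would make a bad problem much worse.
         */
        if (addr <= stacktop && stacktop - addr <= stacklim)
            siglongjmp(env, 1);

        abort();
    }

    static int
    recurse(int depth)
    {
        char pad[128];              /* burn some stack on each frame */

        (void) memset(pad, 0, sizeof (pad));
        return (depth == 0 ? 0 : recurse(depth - 1) + pad[0]);
    }

    int
    main(void)
    {
        char local;
        struct rlimit rl;
        stack_t altstk;
        struct sigaction act;

        stacktop = (uintptr_t)&local;
        (void) getrlimit(RLIMIT_STACK, &rl);
        stacklim = rl.rlim_cur;

        altstk.ss_sp = malloc(SIGSTKSZ);    /* handler runs on this stack */
        altstk.ss_size = SIGSTKSZ;
        altstk.ss_flags = 0;
        (void) sigaltstack(&altstk, NULL);

        (void) memset(&act, 0, sizeof (act));
        act.sa_sigaction = handler;
        act.sa_flags = SA_SIGINFO | SA_ONSTACK;
        (void) sigemptyset(&act.sa_mask);
        (void) sigaction(SIGSEGV, &act, NULL);
        (void) sigaction(SIGBUS, &act, NULL);

        if (sigsetjmp(env, 1) != 0) {
            (void) printf("blew the stack -- recovered\n");
            return (1);
        }

        return (recurse(10 * 1000 * 1000));
    }

Note that the check errs on the side of paranoia: anything that doesn't look like a stack overflow falls through to abort(), for exactly the reason given above.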

I was actually curious about where exactly the Pike/Kernighan code would blow up, so I threw together a little program that uses sigaltstack along with sigsetjmp/siglongjmp to binary search for the shortest input that induces the failure. My program, which (naturally) includes the Pike/Kernighan code, is here.

Here are the results of running my program on a variety of Solaris platforms, with each number denoting the maximum string length that can be processed by the Pike/Kernighan code without the possibility of stack overflow.

                          x86                  SPARC
                     32-bit    64-bit     32-bit    64-bit
Sun cc, unoptimized  403265    187225      77649     38821
gcc, unoptimized     327651    218429      69883     40315
Sun cc, optimized    327651    327645     174723     95303
gcc, optimized       582489    524227     149769     87367

As can be seen, there is a tremendous range here, even across just two different ISAs, two different data models and two different compilers: from 38,821 on 64-bit SPARC using Sun cc without optimization to 582,489 on 32-bit x86 using gcc with optimization -- an order of magnitude difference. So while recursion is a beautiful technique, it is one that ends up with the ugliest of implicit dependencies: on the CPU architecture, on the data model and on the compiler. And while recursion is still beautiful to me personally, it will always be a beauty that is more superficial than profound...

Tuesday Jul 03, 2007

DTrace on the Scoble Show

For those who didn't see it, Team DTrace was on the Scoble Show. As I mention at the end of the interview, this was the day after my younger son was born (if you look closely at my right wrist, you will note that I am still wearing my hospital bracelet in the interview). You can see the cigars that I offered at the end of the interview (and they were damn fine cigars, by the way) in a photo of Team DTrace that Scoble took afterwards. After the photo, we returned to our office and smoked the cigars -- and then had an unplanned conversation with the building management about never again smoking cigars in the office. (I responded that I was done having kids, so they had nothing to worry about.)

Finally, as for DTrace on the iPhone (to which we made brief reference in the interview): it is now our understanding that alas, DTrace is not on the iPhone -- Apple has apparently not yet ported DTrace to the ARM -- but that a DTrace port "may be" in the works. So the dream is alive!

Saturday Jun 23, 2007

Alexander Morgan Gaffikin Cantrill

Our family is very happy to welcome its newest addition: Alexander Morgan Gaffikin Cantrill, born on June 17th at 3:22pm, and weighing in at a whopping 9 pounds, 7 ounces. (And that was five days early!) It's amazing how much one forgets over nearly three years; I've found myself cramming on forgotten (if simple) ideas like burping and "tummy time". (But I can still swaddle like an all-pro!)

Sunday May 20, 2007

DTrace at Joyent

Joyent -- the originators of the Ruby 1.8.5 DTrace provider and Ruby 1.8.6 DTrace provider -- have set up a dedicated DTrace site, which Jason and company discuss in their latest podcast. If you are a Rails shop that cares about performance (and, yes, they very much exist), the resources at the Joyent page should become invaluable to you. And as long as we're on the topic, if you're in San Francisco this Tuesday (May 22nd), and you have fifteen clams burning a hole in your pocket, you might be interested in attending a panel that Jason and I will both be on: Ruby on Rails: To Scale or Not to Scale? (Once we have a few drinks in us, Jason and I also anticipate hosting a follow-up panel: "Jason Hoffman and Bryan Cantrill: Will the Real Doogie Howser Please Stand Up?")