Wednesday Mar 16, 2011

Our Top Story Tonight

... this blog is still closed.

For anyone still following this blog, I'm still doing my main blogging over at my new blog, Synchronous Messages.

The Oracle acquisition of Sun closed over a year ago, but the transition of various assets, such as web properties, is continuing. Over the next couple of months, the content of active blogs on blogs.sun.com will be migrated to new infrastructure at blogs.oracle.com. Theoretically, links will be redirected, so permalinks to articles here should continue to work after the migration.

And yes, to hold a pen is still to be at war.

Wednesday Feb 04, 2009

I'm Closing This Blog

I've created a new blog, and I've decided not to post here anymore. It's not as if I posted here very often anyway. To find out why I haven't posted here very often, and why I've decided to switch to a new blog, please read here. See you at the new blog!

Tuesday Apr 08, 2008

Why I Don't Like Subversion

Jeff Atwood recently posted an article Setting up Subversion on Windows in his Coding Horror blog. I'm not very interested in setting up Subversion on Windows, but I am interested in a statement Jeff made toward the end of the article. This sparked quite a discussion in the comments. Jeff wrote:

I find Subversion to be an excellent, modern source control system. Any minor deficiencies it has (and there are a few, to be clear) are more than made up by its ubiquity, relative simplicity, and robust community support. In the interests of equal time, however, I should mention that some influential developers -- most notably Linus Torvalds -- hate Subversion and view it as an actual evil.

Unfortunately, in his talk at Google, Linus didn't really explain why he thought Subversion is evil. He said that CVS is the "devil" and, since svn is CVS done right, it's "pointless". So I guess it's safe to say that he believes svn is evil. (He further said that anyone who disagrees with him is "stupid and ugly" but that's just his rhetorical style, if you can call it that.)

Subversion was designed to replace CVS, and I think it does that quite well. While I don't think svn is evil, I do think it has some major deficiencies and some characteristics that make it unnecessarily complex, hard to use, and error-prone. So, while I can't speak for Linus, I think I know what he means when he criticizes svn. I won't go as far as Linus and say that using svn is worse than using no version control system (VCS) at all. If I had nothing else, I'd probably use svn. But fortunately I do have something else: I currently use Mercurial. But this article isn't about Mercurial, it's about svn.

Here's a catalog of what I think are Subversion's main problems, and why I'm much happier using Mercurial than I was when I was using Subversion.

1) Centralized vs. Distributed

There has been a lot of discussion about centralized vs. distributed version control systems (DVCS) so I won't repeat it here. Probably the best overview of DVCS is a paper [pdf] that Ollivier Robert presented at EuroBSDCon 2005.

Bill de hÓra also wrote about the benefits of DVCS in a response to Jeff's article.

For me, the biggest advantage of DVCS is that changes can be propagated among repos without passing through a central repository, while preserving changeset history. This creates more ways for developers to collaborate.

I should also clarify that the word "distributed" in this context doesn't imply "distributed" over a network or a "distributed" development team (i.e. a geographically dispersed team). A more descriptive term is "decentralized". Subversion has a network protocol but it isn't a DVCS. A DVCS is still useful for a team that works in the same office every day.

2) Safety During Merges

In svn, if you're committing to the trunk and somebody else has committed since you last updated, you're required to update, merge, and resolve any conflicts before you can commit. The problem is that your own changes aren't stored anywhere but your working copy, so if you screw up during the merge, you might lose your changes.

A more complicated scenario is that you and a colleague might decide to collaborate on a feature. How do you combine your work? You could get your colleague to mail you a patch file, which you'd apply to your working copy. This is essentially doing a merge operation by hand, without the support of the VCS. If the patch file doesn't apply cleanly, you now have a working copy with your changes (possibly modified) and part of your colleague's changes, and you have to merge the rejected hunks by hand. Or worse, the patch might apply cleanly but some part of your code that used to work is now broken. Now you have to figure out what changed, without the benefit of your or your colleague's original versions.

How can you deal with problems like these? Well, you could undo the patch with patch -R. Or, you could take a snapshot of your entire working copy before starting the merge. You could also snapshot your diffs with svn diff. To roll back to your original version, you could svn revert your working copy and then apply the patch file you had generated with svn diff. Or, you could check out another working copy, apply your colleague's patches, and then cherry-pick them into your working copy. This is all doable, but you have to remember to do it, and it's a lot of manual work for you to do without the support of the VCS.
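As a rough sketch, the snapshot-first workaround looks something like this (the file names are hypothetical, and the exact patch options depend on how the diff was generated):

    svn diff > my-changes.patch       # snapshot your uncommitted work first
    patch -p0 < colleague.patch       # hand-apply your colleague's changes
    # if the result is a mess, back it out...
    patch -R -p0 < colleague.patch
    # ...or go back to pristine sources and reapply your own work
    svn revert -R .
    patch -p0 < my-changes.patch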

It's possible to do this in svn if you're willing to create new branches for individual developers. For the first scenario, you'd use svn copy to create a branch of the particular rev of the trunk you started with, then use svn switch to switch your working copy over to this new branch. Next, you'd commit your changes there, and then merge this branch onto the trunk. For the second scenario you'd create two branches, one for each developer. Each developer would switch his or her working copy to the respective branch, then merge one branch into the other (or onto a third branch), and finally merge the result onto the trunk.
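For the first scenario, the commands would look roughly like this (the repository URL, branch name, and revision numbers are all hypothetical):

    svn copy -r 1234 http://svn.example.com/repo/trunk \
        http://svn.example.com/repo/branches/my-feature -m "private branch"
    svn switch http://svn.example.com/repo/branches/my-feature
    svn commit -m "work in progress, now safely in the repository"
    # later, from a trunk working copy, merge the branch back
    svn merge -r 1234:HEAD http://svn.example.com/repo/branches/my-feature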

While it's certainly possible to use this technique in svn, I don't know if anybody actually does. Maybe some expert users do. Even though branching is very lightweight in the repository, it seems to carry a pretty heavy conceptual overhead. Many developers I've talked to consider branching and merging to be a big deal. They also consider svn switch to be deep voodoo. When I was using svn, I didn't use branches, and I did my merges directly in my working copy. Personally I found that this added a lot of stress to the merging process.

What you really want is to be able to commit changesets, pull in new changes without fear, and pass changes around and merge them at will. At any time you should be able to look at your original changeset, or your colleague's, and use this information to assist with the merge. Furthermore, if bugs managed to creep into the merged result, you should always be able to go back into the history and look at one or the other changeset as they existed before the merge. (You can do all of these with Mercurial.)
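Just as a sketch, the two-developer scenario in Mercurial looks roughly like this (the repository paths and messages are hypothetical):

    hg commit -m "my half of the feature"           # my changeset is safely recorded
    hg pull http://hg.example.com/colleague-repo    # bring in my colleague's changesets
    hg merge                                        # both original changesets remain in history
    hg commit -m "merge our work"
    hg log -v                                       # either original can still be inspected later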

3) History of Merges

Suppose you're working on a branch in svn, and now it's time to merge your changes back to the trunk. You have to merge the right range of revisions from your branch onto the trunk. It's fairly easy to find out the starting rev of your branch by using svn log --stop-on-copy.

If you don't specify the right revs, it's possible to miss some of the changes you made on the branch. If you fail to specify any revs at all, you'll merge your changes onto the trunk but undo changes that were made on the trunk after you branched. There's no warning when you do this, so you have to inspect the merged result carefully to ensure that it's correct.

If you're working on a branch for a long time, you might want to make sure that you don't diverge too far from the trunk. So, you merge over the changes from the trunk that have occurred since you created the branch. If your branch is long-lived you might want to do this a second time. When you do, you have to specify the range of revs to merge, starting from your previous merge instead of from the beginning of the branch. If you specify the beginning of the branch, you'll end up merging changes from the trunk that are already present, which will probably result in conflicts. Similarly, when you merge back to the trunk, you have to specify the rev range starting from when you last merged. If you don't, the merge will be wrong.
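For the branch-synchronization case, the bookkeeping looks roughly like this (the URLs and revision numbers are hypothetical):

    # first sync: pull in trunk changes made since the branch was created at r1000
    svn merge -r 1000:1100 http://svn.example.com/repo/trunk
    svn commit -m "sync with trunk through r1100"
    # second sync: must start at the previous merge point (r1100), not the branch point,
    # or the earlier trunk changes will be merged in a second time
    svn merge -r 1100:1200 http://svn.example.com/repo/trunk
    svn commit -m "sync with trunk through r1200"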

The point here is that svn requires you to keep track of revs at which you did merges and to specify them correctly in the merge command. It's also wise to inspect the merge results very carefully, since svn will silently create incorrect merges if you botch the merge command. A VCS should really keep track of what changesets you've made and which you've merged already instead of making you do this work. (Mercurial does this.)
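The Mercurial equivalent of the repeated-sync scenario, again only as a sketch (the paths are hypothetical), needs no revision arithmetic at all:

    cd feature-clone
    hg pull http://hg.example.com/main      # first sync with the main line
    hg merge && hg commit -m "sync with main"
    # ...time passes, more changes land upstream...
    hg pull http://hg.example.com/main      # second sync: hg knows what's already been merged
    hg merge && hg commit -m "sync with main again"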

I believe there is an svn extension that stores this information in properties. But this isn't part of core svn; you have to add and configure this yourself. I hear that a core merge-tracking feature is in the works for a future release of svn, but it's not in any released version as of this writing.

These two issues are probably what Linus is talking about when he says that merging is more important than branching. Merging in svn requires you to do your own bookkeeping, it's possible to lose uncommitted changes unless you do extra work, and it's easy to create mismerges. It's no wonder that people consider branching to be conceptually heavyweight. Creating new branches is easy; it's merging them back in that's the problem.

4) Namespace of Branches

I think branching is a pretty hard concept to begin with, independent of which VCS you're using. Indeed, at the part where Atwood describes setting up the branches and tags directories, he says "none of this means your developers will actually understand branching and merging" and refers to his previous article on branching.

In svn, a branch is a lightweight copy of a subtree from one location in the hierarchy to another. (A tag is just a special case of a branch that isn't intended to be modified. This discussion uses "branching" to refer to both branching and tagging.) Indeed, svn has no direct support for branching: it's just copying. The svn book puts it thus:

The Key Concepts Behind Branches

There are two important lessons that you should remember from this section. First, Subversion has no internal concept of a branch -- it only knows how to make copies. When you copy a directory, the resulting directory is only a "branch" because you attach that meaning to it. You may think of the directory differently, or treat it differently, but to Subversion it's just an ordinary directory that happens to carry some extra historical information. Second, because of this copy mechanism, Subversion's branches exist as normal filesystem directories in the repository. This is different from other version control systems, where branches are typically defined by adding extra-dimensional "labels" to collections of files.

I claim that having svn branches reside in the same namespace as directories of files is actually a misfeature which adds the potential for confusion to the already complex notion of branching.

In a filesystem hierarchy, directories (folders) are used for grouping related files and subdirectories. In svn, directories are also used for branching. The fact that svn treats them all the same is of no help to the user. In fact, you must treat them differently, otherwise things will get totally screwed up.

For example, I suspect that every novice svn user makes the same mistake -- exactly once -- of checking out the root of an svn repo. As evidence of this, the top of the repo browser for Subversion itself contains the following:

NOTE: Chances are pretty good that you don't actually want to checkout this directory. If you're looking for this project's primary branch of development, navigate instead into its trunk/ subdirectory, and follow the checkout instructions there.

So, how do you know whether a directory is a group of related files or a branch? The answer is, you don't; you just have to know. (Well, you can try to find out by running svn log --stop-on-copy but you still have to make some inferences.) There is a pretty strong convention in svn of having a TTB (trunk, tags, branches) structure at some level in the hierarchy. This is a pretty clear indication that these directories are branches (copies) instead of containers of related files. But if you have a repository that doesn't use the TTB structure, or has it in an unconventional location, both people and tools can become quite confused.

For example, I used to work on the phoneME project, which has its TTB structure replicated on a per-component basis a couple levels down the hierarchy. In addition, it has "super-tag" structures named /builds and /releases, which contain copies of components' tags. When we aimed the FishEye repository monitoring tool at the phoneME repository, it buried the svn server, and it reported that there were over 200,000,000 lines of code in the repository! (There are closer to 2,000,000 lines.) The reason was, of course, that FishEye was indexing all the branches, tags, builds, and releases directories as if they were independent files instead of branches (copies). A simple configuration change fixed the problem. However, the point is that FishEye couldn't tell the difference between a directory of files and a branch; it had to be told the difference.

The root cause of these problems is that svn is using the same mechanism -- a directory in a hierarchical namespace -- to mean two different things. This is a clear violation of the rule that similar things should appear similar and different things should appear different. Making different things use the same mechanism might seem like an elegant implementation, but it adds numerous opportunities for confusion and error.

5) Heterogeneous Branches and Working Copies

There is also a convention that branches and tags are full copies of the trunk. This way, you can switch a working copy among branches, tags, and the trunk. However, it's possible to create a branch as a copy of a subtree of the trunk (or in fact of anything else). Unless you know that this was done for a particular branch, it's possible to get into a very confusing state. For example, if branches/b1 is a copy of the trunk, you can switch and merge between the trunk and branches/b1 with no problem. But if branches/x is some arbitrary subtree, switching or merging between the trunk and branches/x will do something entirely different. If you switch and you have uncommitted changes, they won't merge into the new branch, but they won't be deleted either; they'll be stranded in a working copy that's partially on the branch and partially on the trunk.

Speaking of which, the ability to have a working copy with different subdirectories at different URLs is very strange. I'm sure some expert svn users have some use for this, but to me it seems like a lot of rope users can use to hang themselves.

6) Random Merge Issues

This one is admittedly a bit nebulous.

We once ran into a case where one developer's commit undid another developer's changes. It was not a case of somebody simply botching a three-way merge. The developer merged changes from a branch to the trunk, and very carefully specified the correct revs to merge (as described above). Yet it reversed some changes that had been made on the trunk. I rechecked his merge commands and as far as I could tell they were correct. I was even able to replicate the phenomenon on a private branch. However, none of us were ever able to figure out why it happened. There was a lot of renaming going on at the time, so it's possible that merging of renames caused the problem.

We were able to recover because the "lost" changes were still present in the history, so we generated patches from the history and applied them by hand. Still, one expects the VCS to handle these cases and not force you to do things by hand. When something like this happens it really reduces one's confidence in the system.

The history might still be visible in a publicly-accessible repository. If anybody is interested in investigating this, let me know and I can track down the details.

Summary

Even though Subversion seems to be "CVS done right" I find that it has some glaring deficiencies. It also embodies some fundamental design choices that make it harder to use and understand and that increase opportunities for errors. For these reasons, I've never been happy with svn. In contrast, I've been much more comfortable and productive with a DVCS such as Mercurial.

Wednesday Mar 12, 2008

Confusing Complexity With Value

The first in my series of software development sins is confusion over complexity vs. value. More times than I can count I've run across a piece of code or a design that is just way more complicated than necessary. Sometimes this is simply a failure to apply the You Aren't Gonna Need It (YAGNI) principle. Sometimes it's simply someone falling into the sunk cost fallacy; after all, somebody put a lot of effort into making something big and complicated, so it's natural they don't want to simplify it -- because that would mean throwing some of it away. Sometimes it's "hey, lookit all this code I wrote!" (even though most of it was cut-and-paste). Or sometimes it's just ego-driven: "Wow, look at this big complex system. Only a jock programmer like me could have possibly created it."

Certainly the above occur commonly enough, but I think there's something more fundamental going on. Complexity often results from a programmer's desire to deliver a more complete or more general solution to some problem. A more general solution is always better than a special-case one, right? This is often the case in something like, say, mathematics. The problem here is that software development is not mathematics: it's engineering. Usually a more general solution does provide some benefit. Often (though not always) a more complete or general solution is more complex. With complexity come increased costs, not only of implementation but also of understandability, maintenance, testing, documentation, support, and so forth. The latter are long-term, recurring costs, and they dominate the cost of initial implementation, yet they're what programmers think about the least when they're designing or implementing something. A decision to make something more complex might seem reasonable when only short-term costs are considered, but it can have long-term consequences that are hard to foresee. The question is whether the benefit of this additional generality outweighs the long-term costs of the complexity.

The engineering reality is that generality, and the complexity that often comes along with it, is just one of the forces acting on any solution in the design space. Remember this the next time you're tempted to make something more complex.

Software Development Sins

Given that the Vatican has recently released a statement describing a modern set of sins, it seems appropriate for me to expound (no, not pontificate) on the sins that can occur in a software development project. I was tempted to title this "The Seven Deadly Sins of Software Development" but (a) it's a cliche and (b) it's been done before. Many times. (Search for "software seven deadly sins" and you'll see what I mean.)

More to the point, these sins aren't necessarily deadly. They might even occur on successful software projects. Such projects might be a lot more painful than they would be in the absence of these sins, though. So, perhaps these sins can be considered venial. I'm not sure if Wikipedia's theology is to be trusted, but I'll run with it. The criteria listed for a sin to be considered venial ("forgivable") are that (a) it does not concern a grave matter, (b) it is committed without full knowledge (in other words, in ignorance), or (c) it is committed without deliberate and complete consent. Most sins committed during software projects meet one or more of these criteria.

In addition, I didn't want to constrain myself to seven sins. I'm sure I could come up with seven but I might have to pad the list a bit. On the other hand, I might think of more later, so I've made this an open-ended list. Always the optimist, I say.

Meanwhile, on to the first sin!

Tuesday Mar 11, 2008

Daylight Saving Time: Arrrgh

I had to get up early again today. Unlike recent days, it was dark. Argh.

The house was cold, because the thermostat was on a timer that I hadn't set to daylight time. Arrgh.

Arrrrgh!

Thursday Mar 06, 2008

On Software and Sausages

"To retain respect for sausages, laws, and software, one must not watch them in the making." -- Otto von Bismarck (paraphased)

The other day I ran into R, a friend and former colleague from Sun. He's now working for a customer C and they are (I think) considering licensing the software from Sun that he used to work on when he was at Sun. We had a brief exchange that went something like this.

Me: So, considering that you know how software is developed at Sun, are you still willing to license it?

R: (knowing laugh)

Me: Ah, considering that you know how software is developed at C, you're willing to license it from Sun?

R: (knowing laugh)

Friday Feb 29, 2008

Happy Leap Day!

Happy Leap Day! Today, February 29th, comes only once every four years. Of course most of you probably know all of this already. Many of you, though probably somewhat fewer, know *why* we have leap years. The short answer is to keep the calendar in synch with the seasons. A longer explanation can be found here.

A deeper question is, why do we need the calendar to be in synch with the seasons? I'm not sure. Some origins are probably religious, so that we wouldn't have Easter in the winter and Christmas in the fall. Other reasons might be agricultural, so that people know when to plant and when to harvest and when the Nile is likely to flood. In today's modern age this is probably no longer necessary. The fixation on seasons probably originated in the northern hemisphere; the folks in Australia don't seem to complain that Christmas is in the summer. Well, maybe they do, but I haven't heard them.

What's notable about leap year is that it's based on actual astronomical principles: the relationship of the day to the year. This is unlike Daylight Saving Time, which is purely a social construction. DST doesn't actually save anything, and it causes a lot of confusion. Occasionally somebody is an hour off the next day, and occasionally I'm startled by the odd clock that I forgot to reset. Worse, different countries change times on different dates, so the usual time zone differences -- bad enough to begin with -- are temporarily made worse.

Leap years I can live with, but let's just get rid of DST.

Friday Feb 22, 2008

More = Better ?

I had a discussion yesterday that touched briefly upon how agile our project is (or is not). The answer was that we're probably more agile than some other projects I've been on, but that we could be more agile. The underlying assumption was that things would be better if we were more agile. I think this assumption happens to be true, but it's not fundamentally true. Let me explain.

A former manager once said to me that he thought I was process-oriented. My response was, quite emphatically, "I'm not process-oriented; I'm results-oriented!" (I swear this is true. It just popped out.) The reason I'm a proponent of agile techniques is that I think we can achieve better results using them. Otherwise, what would be the point? If we made some change that was more agile, but our results didn't improve, there wouldn't be any benefit. Conversely, if we made some change that did improve our results, I'd be in favor of it regardless of whether or not it's considered "agile".

Note also that I'm using "results" in a very generalized way. For instance, if the team were to deliver the same product, feature set, quality, etc. but with less overtime and stress, and more time spent relaxing with families, and so forth, I'd consider that to be an improved result.

The default in my area seems to be for projects to be very plan-driven. Frankly, I don't think they've worked well. For this reason I've looked to alternatives, including many agile techniques. But the point isn't to be agile for the sake of being agile; it's to get better results. Let's make sure we don't forget this.

Sunday Jan 06, 2008

Herbert Keppler, 1925-2008

It was with great sadness that I learned this evening of the passing of Herbert Keppler. Mr. Keppler had a 57-year career in photography, most notably contributing to industry magazines Modern Photography and Popular Photography. Jason Schneider, currently editor of Pop Photo, has written a nice memorial for Keppler:

http://www.popphoto.com/popularphotographyfeatures/4968/in-memoriam

I remember Keppler from the late 1970's when I was first becoming interested in photography. I switched my subscription from Pop Photo to Modern Photography because the latter contained his column, "Keppler's SLR Notebook." I found his insight and analysis to be very educational, and his columns were a great influence on me.

The guy certainly knew how to keep up with technology. He even had a blog. The latest entry has news of his illness, and now that he has passed there are some comments with condolences:

http://keppler.popphoto.com/blog/2007/12/speaking-frankl.html

Mr. Keppler, I'll miss you.

Thursday Nov 29, 2007

Archival and Resurrection

It somehow seems fitting to resurrect this blog with an entry about data archiving and retrieval.

I had occasion to pull some files off a tape that was about nine years old. Nine years doesn't seem like it's that old. However, it was quite a bit of trouble. The tape in question was an 8mm data tape, physically equivalent to the old video-8 format. Everybody in the office these days (including me) runs Mac laptops or Linux PCs, so systems that support 8mm drives are hard to come by. I'm a pack rat, though, and I had saved an old Ultra2 (SPARC) workstation and an external 8mm tape drive for occasions just like this.

I plugged in the system and booted it and loaded the tape into the tape drive. Didn't read. Ohhh... my tape drive supports only low-density tapes (2GB) but this was a higher-density (5GB) tape. OK, I have another Ultra2 with an internal high-density drive. Booted it. Crap; I don't have the password to login. OK, boot from CD... darn it's so old that the latest Solaris doesn't support it. OK, boot from an older Solaris CD. But it still won't read: the tape drive doesn't work. Arrgh.

OK. Our lab guy has a bunch of spare equipment around so I asked him. The first drive he brought was a 4mm DDS drive. Oops. He came back later with the right tape drive. I plugged it in, booted, and successfully read the tape. My first attempt didn't work though... one has to use the same (or higher) blocking factor that was used to write the tape. What did I usually use? 2048? (One megabyte.) Tried that, and it worked. I was doing "tar xb 2048" and this read a bunch of stuff but had errors; this might have been caused by the stop-start motion of the tape. Trying again with "dd bs=1024k" worked fine and resulted in a tar file that had no errors. (At least, extracting files from it didn't cause tar to complain.) So, partial success: I had retrieved the files from tape and found the ones I was looking for.

Now what? Revisiting this a few days later, I decided to re-read the tape to ensure that I had gotten the right bits, then I'd archive them on other media. I tried to read the tape again, but the drive gave nothing but I/O errors. What? This used to work. Hm, that's odd, the lights on the tape drive were blinking oddly, as if to indicate some kind of error. Worse, I couldn't even eject the tape!! Rebooting, power cycling, etc. didn't work. I had left the drive powered on for several days, so I figured that the drive had overheated or something, so I powered everything off and let it cool down for a couple days. After that, I powered up the system and successfully managed to eject the tape. At that point I shut everything off and decabled the drive. I didn't try to re-read the tape for fear of getting the tape stuck again.

Well, now I have a probably-good read of all these files on disk. Clearly 8mm tape is not a viable archive medium. What is? How about DVD-R? They seem to be on every computer nowadays. This article seems to prefer DVD+R to DVD-R, but I had a whole spindle of DVD-R blanks and this data isn't that important so DVD-R is probably fine.

My Mac has a DVD-R drive so I copied the tar file over to burn it there. I inserted a blank DVD-R, which causes the Mac to create a "burn folder" as a staging area for what to burn. I unpacked the tar file there (using the command line), and it caused the Finder to hang. Crap. The files all seemed to be there, though, so I relaunched the Finder and went ahead with the burn. It complained about "7 items could not be found" which didn't make much sense to me, but I continued anyway. Checking the resulting disc showed that only 1.3GB out of 4GB actually made it onto the DVD. Into the trash. The problem might have been related to symlinks in the tar file. The Mac uses HFS+ by default, and symlinks showed up as "aliases" which might not have been dealt with properly.

OK, then, create a 4GB UFS filesystem image, mount it, unpack the files there, and then use Toast to burn an ISO-9660 disc from it. This seemed to work... though Toast complained about some files (rather a lot, actually) not conforming to standard file naming rules. Most of the problems were, I think, that names were too long. Sigh, but I went ahead and did the burn anyway. It worked, but "du" showed a discrepancy of about 100MB or so less on the disc than in the filesystem. This could be because of the different blocksize between UFS and ISO-9660. Or it could be errors. Hrmmmrm. Most of the files seemed to be there though.

As an insurance policy (hey, discs are cheap) I decided to burn the tar image as a single file to an ISO-9660 disc. Old tape lore has it that one shouldn't write a compressed archive, because if there are any errors, they'll probably ruin the entire archive instead of just a few files. I attempted to write the entire 4GB uncompressed tar image. But the resulting disc had only 2GB on it... huh? Turns out the tar file on the disc was exactly 2147483647 bytes long. (This is the largest possible 32-bit signed integer.) Crap! Toast isn't large-file aware. (Maybe this is because I have an old PPC version of Toast I'm running on my Intel Mac.) Throw that disc into the trash. OK, write a compressed file anyway. The gzip-compressed tar image (.tgz) is just a bit over 2 billion bytes (but less than 2GiB) so it wouldn't run into the large-file problem. It worked. Whew.
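As a sketch (the file names are hypothetical), the workaround was simply to compress the archive so it falls under the 2 GiB limit, and to check the size before burning:

    gzip -c old-project.tar > old-project.tgz   # a bit over 2,000,000,000 bytes, but under 2^31 - 1
    ls -l old-project.tgz                       # confirm the size before handing it to Toast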

* * *

What's the point of all this? I don't think there's anything really new that I learned from this experience. However, I was reminded of some things I knew all along and hadn't paid attention to.

1) Media goes obsolete pretty quickly. The 8mm format is basically obsolete after less than ten years.

2) Time passes pretty quickly. Did I really work on that stuff nine years ago??

3) Keeping drives around in order to read old media doesn't really help much either. You might not be able to find a system that the drive connects to, you might not find the software to boot it, or the hardware itself might rot. In fact, the hardware seems to rot faster than the media.

4) Archiving files to new media is no small task. Media choice, tools, and OS/filesystem issues all conspire to create errors.

5) Anything important should be kept in multiple copies, in different formats, or simply kept online. Not enough disk space? Buy another disk; they're cheap.

6) We haven't even talked about finding software to read the old files.

7) I have a box full of old 8mm tapes. I think I have some work to do.

Monday May 14, 2007

Retrospectives and Regret

David Carlton mentioned the Retrospective Prime Directive. I think the prime directive is a great idea. A retrospective is not a place to deal with firing of slackers or whatever. If somebody is truly neglecting their job, this problem should be dealt with through other means.

I've found that many developers are defensive about their work, and deny that they've written a bug or made a mistake. They reject constructive criticism, etc. for fear that admitting they've done something wrong will be held against them. This attitude gets in the way of learning and improvement. So, in a retrospective, one of the key values needs to be a sense of personal safety, so that people don't fear being blamed for something. This lets the group focus on improving itself.

I'm not sure that "regret" is the right word. To me, "regret" means "I wish I hadn't done that." It has a very negative connotation. Perhaps "critical self-examination" is a more descriptive term, though it's also more verbose. If you had a chance to go back and do it all over again, would you do it differently? If the answer is "yes" then that should be a positive: it means you learned something. Capturing what you've learned and using it to improve yourself and your group is the whole point of a retrospective. If you haven't learned anything, then something is seriously wrong.

Friday Mar 23, 2007

Mike Cohn at bayXP

I attended Mike Cohn's talk at bayXP on Tuesday (March 20, 2007) entitled "Planning and Tracking Agile Projects." Mike is quite open about all the stuff he presents. His presentation is available at http://mountaingoatsoftware.com/presentations along with a bunch of other stuff.

This wasn't so much a presentation as a workshop. A couple times Mike asked the audience to form small groups and perform some planning activities. In one instance he passed out decks of "planning poker" cards and had us play planning poker with (that is, give estimates for) various made-up tasks. The dynamics of planning poker are quite interesting. It really helps you flush out unstated assumptions and disagreements about what a task involves.

The event was hosted at Google, which provided excellent food (hummus, baba ghanoush, pita, spinach in phyllo, falafel, etc. plus an excellent selection of beer, wine, and soft drinks). Combined with Mike's excellent talk, and the low, low price of admission -- $0! -- this evening was an exceptional value.

Thanks to Mike, the bayXP organizers, and Google.

Tuesday Mar 13, 2007

Is The Triple Constraint Agile?

Let's revisit this "triple constraint" business I blogged about in my previous blog entry. What's this doing in the "Agile" category? Isn't the triple constraint that thing from the Project Management Body of Knowledge (PMBOK), which is about classical non-agile waterfall stuff? In a word, no. I don't know much about the PMBOK, but I believe the triple constraint applies to any project, regardless of how it's planned.

The triple constraint is a model for reasoning about planning. The style in which planning is done, whether waterfall or agile or something else, doesn't affect the fundamentals of the model. The style certainly affects the outcome of the plan, but that's a different story.

Consider a waterfall, plan-everything-then-execute approach for managing a project. The triple constraint certainly applies here. You're given a certain resource (cost) budget, a desired scope for what to deliver, and a target date for delivery. It's usually the case that you develop time estimates for everything and the end date that pops out is far later than the target date. So you have to add resources or cut features to pull in the date. You have to make the tradeoffs the model implies before you can formulate a credible plan for delivering. Then (theoretically) you execute according to plan.

Now consider an agile process. It doesn't matter which, really, since they pretty much all use an incremental planning model. During each planning cycle, you have to make the same tradeoffs among cost, scope, and time. The difference is that you do this incrementally instead of all up-front. For example, consider an urgent customer request that comes in the midst of a release cycle. At the next iteration planning meeting, you reshuffle the project backlog to put the customer's requests at the top, which pushes everything else down. This implies that you can deliver the original set of features at a later date, or you can deliver a reduced set of features at the same date as originally planned: a classic triple constraint tradeoff.

The key point is that incremental planning provides a structure for making these tradeoffs on a regular basis. By contrast, in a waterfall approach, making changes of this sort is viewed as a project planning error: you have to replan everything. This example embodies the line from the Agile Manifesto, "Responding to change over following a plan." The alternative is not to make the change at all, which gives rise to the idea that waterfall approaches tend to resist change.

Agile planning approaches embrace the triple constraint wholeheartedly as a normal part of the planning process, whereas waterfall approaches make all the tradeoffs up front and hope they stick.

Wednesday Mar 07, 2007

The Triple Constraint

Here's a bit of elementary project management: the triple constraint. This was explained to me by an experienced project manager colleague some time ago, who drew a diagram on my whiteboard: a triangle whose three corners are labeled cost, scope, and time.

The "Q" in the middle of the triangle stands for quality. The point is that the variables are interrelated: changing one affects both of the others. If you don't account for the changes to the others, you "break the triangle" and it's quality that suffers.

Now I don't really have any formal project management experience, but I've been on a lot of projects. I've been on projects where the end date and the size of the team ("cost") were fixed, but scope was increased. And what do you know, when we got to the end date, there was a huge pile of bugs. When my colleague presented this model to me it seemed blindingly obvious. Yet, it seems that many people just don't get it.

(Some literature considers "triple constraint" to be an obsolete term, preferring instead to consider cost, quality, scope, and time as a set of four variables. This goes against the Received Wisdom of Quality, which states that Quality shall not be a variable. But I digress.)

Upon further reflection, it's not that people don't "get" the triple constraint. The real problem is that people are unwilling to confront the results of thinking through the model.

Now, like any model, there are some qualifications. I'm thinking of two in particular: "in the long run" and "all other things being equal." I'll cover these in separate blog posts.
