Thursday May 24, 2007

Table of Contents

Those who read this blog know that I'm working on a book, what I've called a field handbook for project leaders. For the last six months I've been writing the book one blog posting at a time. I'm about halfway done, with over 70 pages and more than 30,000 words written so far. At this rate, I should be done by the end of the year.

I was telling someone that my blog was a book. But a book should be read from beginning to end, and blogs tend to be organized in reverse chronological order (and finding the first post can take more than a few clicks). So I thought it might be useful to publish a table of contents, with links to the posts. If nothing else, it's much easier to scan. If this works out, I'll see if I can embed the table of contents right into the masthead.

OK, here's the title page and table of contents of the book...




The philosophy, art and science of software project leadership.
Robert J. Hueston

Table of Contents

Copyright 2007, Robert J. Hueston. All rights reserved.

Friday Apr 27, 2007

Plan Analysis: Ideas, Inspection and Intuition

In my posting Know what you're doing (Part III) I introduced the concept of engineering a plan. One of the key steps in engineering, plans or products, is analysis.

This is the fourth in the series of analytical techniques for plans, and includes three fairly simple items to finish out the list: Ideas, Inspection and Intuition.


As a young engineer, fresh out of college, I recall getting a small project from my boss. I worked on it for a while and then got stuck. I struggled with a specific problem for a day or two, at which point I felt defeated. In disgust, I went back to my boss to tell him I couldn't handle the task. He was upset, to say the least. "Why didn't you ask me earlier?" he asked. He already knew the answer. Engineering is not a college test or term project. He told me that the best engineers are often lazy, and lazy engineers borrow (Copy? Steal?) ideas from others. It's more efficient to find a solution that already exists and works well, instead of inventing a new solution for every problem. In school, you're rewarded for doing your own work and not copying from others; in engineering, you're rewarded for copying as much as possible.

When working on a plan, getting ideas and information from others, especially more experienced people, is important. There are many ways to get ideas -- reading others' work, asking, brainstorming. Just don't feel like you have to solve planning issues in a vacuum.


Everyone makes mistakes. We're human. We forget things. We're not always thorough. We don't always think everything through. Every process must assume human error, and work to ensure that the cost of human error is minimized.

When engineering a product, it is common to invite peer engineers to inspect the design. Another engineer may look at a design and identify logic flaws, raise questions about the design's tolerance to errors, or just ask questions that cause the author to re-think his own work. When engineering a plan, we should have a process to invite peer project leads to inspect the plan. Like a code walk-through or a schematic page-turner, the plan author and inspectors should walk through the deliverables, tasks and measurements that make up the plan.

In my blog postings, you may have noticed I avoid the word "reviews" and instead use "inspections." The reason is partially semantic: a dictionary will define "inspect" as "to look carefully at," while "review" can mean "a general survey of something." When you hold a "review," participants may feel invited to casually scan the material; when you hold an "inspection," participants tend to feel more of an onus to pay close attention and review the material in fine detail. [In a later posting I'll blog about an effective inspection process.]

Beware that your boss (manager, marketing, venture capitalist) is not a peer. Your boss's goal may be to encourage you to reduce cost and reduce time to market, while also increasing product features and holding firm on quality. That's their role -- to try to get more for less. The project lead's role, on the other hand, is to engineer a realistic and achievable plan. On several occasions I've had project leads come to me when their manager has told them that a plan's schedule was too long, and they needed to pull it in. They naively believed their manager was trying to help come up with a better plan. In reality, the manager was trying to get the plan to align with some other strategic milestone. If the project lead has done a good engineering job on the plan -- they've done their analysis, used independent methods like CoCoMo to confirm their measurements, and have had peers inspect their plan -- then the only response to their manager should be: If you want to reduce the schedule, we either need to add people or drop features.


My definition of intuition is: A subconscious analytical process based on historical data and personal experiences. In other words, when that little voice in the back of your head tells you something is wrong, it's probably because you have subconsciously analyzed the situation and found a problem. The trick is getting your subconscious to cough up the details.

The subconscious is a powerful analytical engine. Years ago I worked at a videoconferencing company and learned about lip-sync -- the degree to which the audio (a person's voice) is synchronized with the video (a person's lips). If the audio and video are out of sync, even by small amounts, most people cannot identify the problem, but their subconscious reacts; they feel discomfort, stress or nausea (in my case, it was the last). For an example analysis, see Effects of Audio-Video Asynchrony on Viewer's Memory, Evaluation of Content and Detection Ability. Interestingly, people can tolerate when the audio lags the video by up to 45 milliseconds, but they are bothered when the audio leads the video by as little as 10 milliseconds. This is probably because the human brain has learned that sound travels slower than light, so it is normal to see motion first, then hear the associated audio. But it is completely unnatural to hear the audio first, and this causes the brain to rebel. Subconsciously, the brain processes the auditory and visual stimuli, determines what is appropriate and not appropriate, and notifies the rest of the body that something is very, very wrong.

Similarly, a person with significant experience may look at a project plan and feel uncomfortable, stressed or even nauseous, but not be able to identify the problem. When you ask them what's wrong, they might say, "I don't know. It just doesn't feel right." It's tempting to ignore their comments, but I've learned that what they're really saying is, "My subconscious is using my many years of experience to analyze your project plan, and it's finding major issues, but I don't know yet how to verbalize the results of that analysis."

How can you help the person identify the real issue that their subconscious is flagging? Using the audio/video lip-sync analogy, you might cover up all except part of the screen, and ask if this corner of the image is bothersome. If not, repeat the process using another portion of the screen. Eventually when you uncover the lips, and the audio is not perfectly synchronized, the person will immediately feel discomfort. And with their attention focused on the lips, they will consciously recognize the lip-sync problem. The same can be done with a plan. Draw the person's attention to the high-level list of deliverables. Is this the problem? If not, ask about the detailed tasks and deliverables, the staffing, the schedule, the risk remediation and contingency plans, the list of required equipment and space needs. Hopefully, once the person's attention is drawn to a specific area, they will be able to verbalize their specific concerns.

Trust your own intuition, especially if it has a good track record. And trust the intuition of those who have been successful in the past.

Other Plan Analysis Techniques:
Copyright 2007, Robert J. Hueston. All rights reserved.

Wednesday Apr 25, 2007

Plan Analysis: CoCoMo

In my posting Know what you're doing (Part III) I introduced the concept of engineering a plan. One of the key steps in engineering, plans or products, is analysis.

This is the third in the series of analytical techniques for plans: CoCoMo.


CoCoMo is the Constructive Cost Model, which is an empirical model for software development projects. The model was created by examining many projects, from small to large, simple to complex, using various programming languages. You give it the number of lines of code, and other information about your product, your team, and your development environment, and it tells you how long projects like this normally take. The model is very accurate; quite frankly, eerily accurate.

I was first introduced to CoCoMo in 1987, when I was working in the aerospace industry. Over the ten years that followed, I used CoCoMo as an integral part of all software planning. Even after I left aerospace for commercial product development, I continued to use CoCoMo and evangelize it to others.


CoCoMo works by, well, quite frankly, I have no idea how it works. It just does. The CoCoMo model was developed by reviewing many projects, from small to large, embedded to interactive, and an equation was fit to the empirical data. Actually, several equations were developed -- from a simple (basic) version with just a few variables, to a complex (expert) version with dozens of variables.
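The basic version can even be sketched in a few lines of code. The coefficients below are the published basic-CoCoMo values for Boehm's three development modes; the online tools implement the more elaborate attribute-driven equations:

```python
# Basic CoCoMo (Boehm, 1981): effort and schedule from lines of code alone.
# (a, b, c, d) coefficients per development mode, from the published tables.
MODES = {
    "organic":      (2.4, 1.05, 2.5, 0.38),  # small teams, familiar problems
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),  # tight hardware/operational constraints
}

def basic_cocomo(kloc, mode="organic"):
    """Return (staff-months, calendar months) for `kloc` thousand lines of code."""
    a, b, c, d = MODES[mode]
    effort = a * kloc ** b        # staff-months
    schedule = c * effort ** d    # calendar months
    return effort, schedule

effort, months = basic_cocomo(9.6)  # roughly 26 staff-months over 8-9 months
```

Note that the basic model knows nothing about attributes like platform volatility, so its figures will differ from the attribute-aware versions; it's still a useful first-order sanity check.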

When I learned how to use CoCoMo, there were worksheets that you'd fill out, then you'd spend a few minutes crunching the numbers and equations. One of the first things I did as a junior engineer was put the equations into a Lotus 123 spreadsheet. [This was back when most engineers had TI-55 III calculators and some had a VT-220 terminal on their desk. Few even had PCs or knew what Lotus 123 was. I wonder how many young engineers today know what Lotus 123 was.] Today, there are online versions, including one from the University of Southern California, which greatly simplify the task.

The new tools are very simple to use. You start by entering the number of source lines of code (new, reused or modified). Then you answer several questions to define "attributes". The lines of code plus the attributes constitute the "variables" of the CoCoMo model equations. As a suggestion, leave all of the attributes at "nominal" and review the questions to see if any attributes really should be adjusted up or down; nominal works for most things. The attributes are divided into four categories: product, project, platform and personnel, described below.

Product attributes include how reliable the product needs to be (is it the software that controls the autopilot system for a commercial jetliner; or xeyes), size and complexity of the database, and product complexity (are the algorithms well understood, or cutting edge).

Project attributes cover how the project is executed: the use of engineering tools and development methodologies, extent of distributed collaboration required, and the overall schedule demands.

Platform attributes include execution and memory constraints (is the platform an 8051 with 128 bytes of RAM, or a high-end server with 128 CPUs and a terabyte of RAM?). It also includes platform volatility (is the hardware still in development, or is it a mature product that is already shipping).

Personnel attributes address how capable the engineering team is, their familiarity with the product, the platform, and the language. There is always a tendency to claim your team members are above average, but in reality, most teams are "nominal".

After setting all of the attributes, you click a button, and it gives you a measure of the staff-months to execute the project, as well as a schedule (calendar months) measure. This isn't to say that your project will take this long or cost this much. But it is a measure of what similar projects have cost.

CoCoMo: Historical Examples

As an example, take a small project my team just completed. It took a little over two years to develop the software. The first year I had two people working on it; for the next year and a quarter there was just one person available to work on the project. Total cost was approximately 39 staff-months, and in the end there were 9,600 lines of C++ code.

The software ran on an existing OS, and existing CPU, with enough memory and storage. But it was controlling a newly designed hardware system that attached to the computer, so I set the platform volatility attribute to "high". I left all other attributes at nominal -- if I wanted to spend more time, I could probably tweak them, but for a quick demo, I just accepted the default settings. I plugged these values into the CoCoMo tool, and it predicted a cost of 37.9 staff-months -- within 3% of the actual cost. CoCoMo also predicted that the project could have been completed in about a year with a little more than three people. Perhaps, but my schedule was driven more by staff and hardware availability, not time-to-market.

Another project I completed recently had 95,000 lines of code, and took a team of 12 people just under three years to complete and ship. That comes to about 420 staff-months of development. I plugged the 95,000 number into CoCoMo, and since there was nothing earth-shattering about the project I left all the attributes at nominal. CoCoMo came up with 439.8 staff-months. OK, CoCoMo is high by almost 20 staff-months, but keep in mind that's an error of only 4.7%. Pretty good, when you realize that all I gave CoCoMo was one number: the lines of code. Next time, I'll tell CoCoMo that my team is above average in capability.

Using CoCoMo on historical data is a nice confirmation of the model. But a model is only valuable if it can predict the future, and this model is only useful if your variables are accurate. Specifically, you need an accurate measure of the source lines of code that you're going to develop or reuse. But quite frankly, I often find it's easier for many engineers to tell me the amount of code they need to produce rather than the amount of time it's going to take them.

If you can find another project that is similar, it can be fairly easy to come up with a reasonably accurate measure of the lines of code you will need to develop. Consider the last example. I have an old email from back before the project started where someone points out that this project is about half the scope of some other project we finished the previous year. I just checked, and that project developed 208,000 lines of code. With 104,000 lines of code, CoCoMo predicts 485 staff-months of effort. A very quick and rough back-of-the-envelope measure predicted the number of lines of code to within 10% of actual, and CoCoMo gave us a measure of the staff costs to within about 15%. Not bad for 10 minutes of analysis. Compare that to the four weeks we spent at the start of the project listing all of the high-level requirements, decomposing them into tasks and sub-tasks, and creating plans: CoCoMo is much faster, and in this case more accurate.

CoCoMo in Plan Analysis

CoCoMo does not develop plans for you. It is a tool for analyzing plan data.

I find the best way to use CoCoMo is to confirm or contradict the detailed planning work that you are doing. After defining your tasks and measuring them, the work adds up to some total cost for the project. You can then use CoCoMo to see if the sum of the tasks is reasonable, as a sanity check. If your detailed plan differs from CoCoMo by more than, say, ten or twenty percent, I would start to worry.
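That comparison is trivial to make mechanical. A minimal sketch (the staff-month figures are hypothetical):

```python
def plan_deviation(plan_staff_months, cocomo_staff_months):
    """Fractional deviation of the bottom-up plan from the CoCoMo measure."""
    return abs(plan_staff_months - cocomo_staff_months) / cocomo_staff_months

# Hypothetical: a detailed plan summing to 50 staff-months against a
# CoCoMo figure of 42 is off by about 19% -- close to the worry threshold.
deviation = plan_deviation(50, 42)
```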

I've also found CoCoMo to be an excellent independent tool for defending project cost. When selling a plan, you can present detailed plans and show how you came up with your costs. Then you can present how CoCoMo confirms your analysis with a similar cost figure. When a person can back up a plan with a well-established model such as CoCoMo, it adds a lot of credibility to the plan.

Other Plan Analysis Techniques:
Copyright 2007, Robert J. Hueston. All rights reserved.

Monday Apr 23, 2007

Plan Analysis: Smart Deliverables

In my posting Know what you're doing (Part III) I introduced the concept of engineering a plan. One of the key steps in engineering, plans or products, is analysis.

This is the second in the series of analytical techniques for plans: Analyzing deliverable definitions.


Analyzing a product (a circuit board, or a software library for example) usually involves identifying the inputs, and how those map to outputs. Analyzing a plan involves the same thing.

In a product design, for example designing a circuit board, you might start with the inputs and outputs of the board. As you decompose the design, you specify the inputs and outputs of functional blocks, the inputs and outputs of chips, and perhaps, if your design includes FPGAs or ASICs, the inputs and outputs of functional blocks within the chip.

Inputs and outputs must be well defined. It would not be acceptable to say a signal should try to do its best to go high when an input goes low, or that a signal should change state "at some point" in the future without providing a tight time limit. It's also fruitless to define output signals that are not needed or used by any other circuit (we might realize that the person designing the chip will probably need such a signal internally, but we don't spend time trying to guess how a chip might work internally).

[For software engineers, consider the old K&R method of declaring function names without function prototypes. You had to guess what the inputs and outputs of each function were, the compiler didn't enforce them, and if you got them wrong, you wouldn't know until the code didn't work right. If you were lucky, the program would seg fault; if you weren't lucky, it would misbehave in subtle ways. Tightly specifying function inputs and outputs, including the number of arguments, their types, and whether they were const or variable, was a key addition to the C language that made it possible to engineer large-scale applications in C.]

We all know that if an input or output is not properly specified, the result will likely be a product that just doesn't work. We all know this, yet when we engineer a plan, a common problem is vague definition of inputs and outputs.

For a project, inputs and outputs are deliverables: what you deliver from your project are your outputs; what you need delivered to your project are your inputs. Deliverables must be specified with the same sort of engineering rigor as electrical signals in a circuit design.

When we say that something is well-defined, we usually mean it is specific, measurable, achievable, relevant, and time-bound: SMART.


Specificity in engineering is commonplace. Yet there tends to be a lack of specificity when engineering a plan -- when we describe the outputs of a task, or the outputs of the entire project. What, specifically, will be delivered?

As an example, I've often seen the task deliverable "code complete" in a plan. "Code complete" is not very specific. To one person, it may mean that they have finished typing the code, but they haven't compiled it yet. To another person, it may mean that the code is written, compiles cleanly, has been inspected by peers, and completes a set of unit tests (after all, how do you know it's "complete" unless you've tested it?). Whose interpretation of "code complete" is correct? They both are, because "code complete" is not specific and is open to broad interpretation. Whose fault is it that the deliverable does not meet expectations? The project lead's.

Another slightly humorous example I saw recently was a schedule item called "unit testing complete". The owner claimed he was done, and indeed he had run all the unit tests on his code, but half of them failed. The project lead felt "unit testing complete" meant "unit tests run and all tests pass". In another case, a project lead thought "requirements complete" meant the requirements document was reviewed and approved, while the document's owner thought it meant the document was written and ready to start being reviewed. As a general rule of thumb, if a deliverable has the word "complete" in the definition, then it probably isn't defined completely.

One easy way to address specifics is to define a set of standing rules. For example:

    Not specific: Requirements complete.
    Specific: Requirements documented, all issues and TBDs resolved, and reviewed and approved by all applicable parties.

    Not specific: Design complete.
    Specific: Design complete, meets all requirements with no outstanding issues, and has been reviewed and approved.

    Not specific: Code complete.
    Specific: Code written, compiles cleanly with no errors or warnings, meets code style guidelines, has successfully completed code inspection, has completed and passed all unit tests, and has been checked into the source code repository.

    Not specific: Testing complete.
    Specific: All planned tests have been executed, and either all tests have passed, or bug reports have been submitted for all failures.
These are clearly just examples, meant to highlight how you could define "complete" in specific terms. Being specific in a plan is as important as being specific in product design.

When analyzing a deliverable to see if the definition is specific, ask yourself: Would everyone have the exact same understanding of the deliverable? If not, then the definition is not specific.


When we talk about something being "measurable," we're really saying that we can prove empirically that it's true. In hardware design, a requirement like "the output should toggle really fast" is not measurable, so we cannot judge if an implementation actually meets the requirement. I can just imagine the quality of a product if the hardware "should do its best to meet most of the set-up and hold requirements" or if the software "should be small and execute fast." The product would likely be unusable; the same is true for a plan that lacks measurable deliverables.

It's sometimes hard to separate the discussion of "specific" and "measurable." If you're not specific, then rarely are you measurable. And in the previous section, I tried to provide examples that were both specific and measurable. On the other hand, it is possible to be specific and still not measurable.

For example, there could be a deliverable such as, "All unit tests execute and pass." It is specific in that the unit tests must exist and both execute and pass. But how do you know that the unit tests are sufficient? If I wrote one unit test, executed it, and it passed, am I done? On the other hand, "All unit tests execute and pass and provide 99% statement coverage as measured by gcov" would be specific and measurable -- it tells you what the completion criteria are and how to measure them. [gcov is a tool that measures which statements of code have been executed.] Anyone could inspect the gcov coverage report to see empirically that the deliverable met all its requirements.

When analyzing a deliverable to see if it's measurable, ask yourself: How do I know it is done? If you're having trouble with the answer, then the definition of the deliverable probably isn't measurable.


It seems ridiculous to have to point out that deliverables must be achievable. But when I started to think about this, I realized that unachievable requirements for deliverables are far more common than I had ever suspected.

Take, for example, a requirement like, "The code will be complete and bug free before delivery to QA to start testing." In large, complex systems, it is virtually impossible to be bug free, let alone bug free before testing. So what's the problem with a requirement like this? For one, developers will read it, recognize that it's unachievable, and laugh it off without further consideration. Perhaps the requirement gets changed to, "The code will be complete and mostly bug free..."; however, that's not measurable. Maybe the real intent was, "The code will be complete and all bugs found during unit testing will be fixed or waived by the manager of QA before delivery". This last requirement is achievable, measurable and specific.

Be careful of requirements that have words like "all", "none", "never" or "always" in them. That can be a flag that the requirement is not achievable. Note that in the previous paragraph I had "all bugs... fixed or waived..." You may come across a situation where one bug is not resolved. If the deliverable is defined as "all bugs fixed" then you'll have issues to deal with while executing your plan (you'll be running around trying to invent a waiver process); it's better to establish achievable requirements up front so that execution can go smoothly.

When analyzing a deliverable to see if it's achievable, ask yourself: Am I 100% certain this specific and measurable deliverable can be met? If you are unsure, you may be dealing with an unachievable deliverable.


Obviously, deliverable definitions should be relevant -- don't specify the color of the paper that a document should be printed on (especially if it's going to be distributed electronically).

But there's another form of relevance that is often forgotten during planning: Don't specify deliverable outputs that aren't inputs to someone else. It seems obvious, but apparently it isn't, because I'm constantly seeing plans that include deliverables (documents, code deliveries, etc.) which are not needed by anyone else. In many cases these are "internal signals" -- things that probably must be done as part of the task working toward the deliverable, but they are not deliverables themselves.

I don't know how many times I've been at project reviews where a project lead reports that a deliverable, for example the stack usage analysis, is slipping its schedule. The VP will undoubtedly ask, "Who is affected by this delay?" And the answer is usually, "Well, no one. It's just used internal to the team." If no one is affected when a deliverable is delayed, then the deliverable is probably not relevant.

Of course, relevance may have many layers. From a "product" point of view, an internal deliverable may not be relevant. Within the project team, if one team member does not deliver X (an internal deliverable) to another team member, then the schedule for product deliverable Y could be at risk. When working within the team, X is relevant; when discussing the project with external people, then Y is relevant.

When analyzing a deliverable to see if it's relevant, ask yourself: Who would care if this deliverable is delayed or canceled? If the answer is "no one," then it's probably not relevant.


Time-bound means there's a due date, a time when the deliverable must be available. It's pretty easy to tell if a deliverable has a time boundary; but it may take some analysis to tell if the time boundary is a good one.

When we put together a schedule we usually have a date when something should be done. But a plan is not an estimate of when you might be done, it's a promise of when you will be done. [That sentence has become sort of a mantra with me and my teams. I can now say the first half, and almost anyone who's worked with me will finish it.] A Gantt chart is not a plan; a schedule is not a plan. A plan is a promise, a contract.

If your best engineers got together and thought they would be done with a product by January 1st, your schedule might say January 1st. But suppose your boss (manager, marketing, venture capitalist) asked for your "drop dead" date -- the date you promise you will be done, and if you miss it, you're fired. You might not pick January 1st. You'd pick a date that you were 95% confident your team would be done. Maybe February 15th.

A plan should document the dates you promise deliverables. It's fine to say you might be done January 1st, but you're willing to promise February 15th. You'd continue to work your team toward an early finish date of January 1st, but when you report on your deliverables outside the team, you'd report on your confidence of hitting February 15th. And if you finish January 1st, or January 31st, or February 14th, people may be pleasantly surprised, and you will have met your promise.
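One way to turn the 95%-confidence idea into a number is to simulate the plan. The sketch below uses hypothetical task estimates and a made-up overrun distribution; the point is the shape of the technique, not the specific figures:

```python
import random

def promise_weeks(task_estimates, confidence=0.95, trials=10_000, seed=1):
    """Simulate total duration with uncertain tasks; return the finish
    (in weeks) we can promise at the given confidence level. Each task
    runs 0.8x to 1.6x of its estimate -- an illustrative assumption."""
    rng = random.Random(seed)
    totals = sorted(
        sum(est * rng.uniform(0.8, 1.6) for est in task_estimates)
        for _ in range(trials)
    )
    return totals[int(confidence * trials)]

# Hypothetical plan: six tasks with a best-guess total of 26 weeks.
estimates = [6, 4, 5, 3, 4, 4]
promised = promise_weeks(estimates)  # the promise date lands weeks later
```

The gap between the best-guess total and the 95th-percentile finish is the same kind of gap as the one between January 1st and February 15th above.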

When analyzing a deliverable definition to see if it's time-bound, ask yourself: Do I know exactly when it's due, and can I promise to meet that date? If you're not sure of the answer, then the time boundary may only be an estimate, and you need to make it a promise.


When analyzing the deliverables from a project, ask yourself if they are SMART. Ask yourself:
  • Specific: Would everyone have the exact same understanding of the deliverable?
  • Measurable: How do I know it is done?
  • Achievable: Am I 100% certain this specific and measurable deliverable can be met?
  • Relevant: Who would care if this deliverable is delayed or canceled?
  • Time-bound: Do I know exactly when it's due, and can I promise to meet that date?
Some of this may seem pedantic. But the results of your analysis will yield better defined deliverables, and fewer surprises in the long run.
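The five questions can even travel with the plan as data. A minimal sketch, with a hypothetical deliverable review (the answers shown are illustrative):

```python
SMART_QUESTIONS = {
    "specific":   "Would everyone have the exact same understanding of the deliverable?",
    "measurable": "How do I know it is done?",
    "achievable": "Am I 100% certain this specific and measurable deliverable can be met?",
    "relevant":   "Who would care if this deliverable is delayed or canceled?",
    "time_bound": "Do I know exactly when it's due, and can I promise to meet that date?",
}

def smart_gaps(answers):
    """Return the SMART questions that have no recorded answer yet."""
    return [q for key, q in SMART_QUESTIONS.items() if not answers.get(key)]

# Hypothetical deliverable: three criteria answered, two still open.
answers = {
    "specific":   "All unit tests execute and pass with 99% coverage per gcov.",
    "measurable": "Anyone can inspect the gcov coverage report.",
    "time_bound": "Promised for Feb 15.",
}
open_questions = smart_gaps(answers)  # achievable and relevant still unanswered
```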
Other Plan Analysis Techniques:
Copyright 2007, Robert J. Hueston. All rights reserved.

Friday Apr 20, 2007

Plan Analysis: Risks and Dependencies

In my posting Know what you're doing (Part III) I introduced the concept of engineering plans. One of the key steps in engineering, plans or products, is analysis.

Back in college, I took many analysis courses related to my major. Circuit Analysis I and II, Engineering Analysis, Statistical Analysis, and the content of many other courses stressed analysis. A good engineering education must include a good foundation in analysis. Engineering a plan is no different. I wanted to present a few analytical techniques for planning.

The first in the series is risk and dependency analysis...

Risk Analysis

Three common sections in a project plan are: Assumptions, Risks, and Dependencies. I hate assumptions; all assumptions are risks, you're just not planning on dealing with them. If it were up to me, the word "assume" would be banned from project plans. Dependencies are similar. If a dependency has already been satisfied, then it simply "is". If a dependency has not already been satisfied, then there's a risk that it won't be satisfied. You don't manage dependencies; you manage the risk that dependencies will not be met in a timely fashion.

One way I like to analyze risks and dependencies is a simple table, with columns for:

  • Risk: A description of the risk or dependency, in just enough detail so I remember what I was afraid of. Some people are fanatical that it must be worded as a risk (for example, "Hardware schedule" is not a risk, but "The hardware schedule might slip" is). I'm not fanatical about anything; whatever works for you works.
  • Likelihood: How likely it is that the risk will evolve into a real problem. The likelihood may change over time; something that is unlikely to be a problem at the start of a project may become very likely when the due date is approaching and the risk has not yet been avoided. I like to simply rank the likelihood. You can use any rating system (a scale of 0 to 100, for example), but I prefer the simple high, medium, low ranking.
  • Impact: What is the impact if the risk becomes a real problem. Again, any rating system can be used, such as high, medium and low. Impact is a bit subjective, but it should address the impact to the overall product. For example, a high impact risk is one that could cause the entire product to be canceled or significantly delayed. A low impact might mean increased cost or a small impact to product schedule.
  • Remediation Plan: This is what I'm going to do to ensure that the risk does not become a problem. For dependencies, this might include communicating with the supplier early and often, tracking interim milestones, etc. For technical risks it might mean doing early prototype work, or adding subject-matter experts to the team.
  • Contingency Plan: This is what I'm going to do in case the risk evolves into a problem, that is, in case my remediation plan has failed.
I believe separating likelihood and impact is essential. Too often we concentrate on risks that are very likely but have low impact. What if Joe misses his deadline by a day? Highly likely, perhaps, but if it's only a day, then it may be low impact. Some people tend to worry too much about risks with high impact but low likelihood. I've actually seen people list dependencies that had already been delivered, just because it would have been really bad had they not been delivered. Forcing myself to identify whether a risk is both likely and impactful helps me concentrate on the risks that will most likely cause the most problems.

Below is an example of a portion of a risk analysis table.

    Risk 1: Delays in the hardware schedule may delay prototype availability, and impact boot-code testing.
      • Likelihood: Medium. Impact: High.
      • Remediation Plan: Attend the monthly hardware status review so that we have early notice if the hardware schedule is slipping.
      • Contingency Plan: Spend extra time up-front to improve the simulation environment so that we can continue development even if hardware is delayed. If likelihood increases to "high" before Dec 1, order additional systems for the lab so we can reduce integration time by doing more testing in parallel.

    Risk 2: Company XYZ must deliver a driver for their network card to support first power-on and boot.
      • Likelihood: High. Impact: Medium.
      • Remediation Plan: Contacted XYZ and informed them of our technical and schedule needs. Working with the Legal department to get legal agreements in place. Joe in Supplier Management will contact XYZ monthly until the driver is delivered.
      • Contingency Plan: Although the ABC network card will not be used in the product, we already have the driver and legal agreements in place. If we don't have the XYZ driver by Dec 15, will purchase a dozen ABC network cards for power-on testing. If we don't have the XYZ driver by Feb 15, will be unable to start performance testing and the product release will be delayed.

    Risk 3: Plan depends on buying libraries from DEF.
      • Likelihood: Low. Impact: Medium.
      • Remediation Plan: Purchase order is already written. Management has indicated that they will approve it.
      • Contingency Plan: If management does not approve the purchase order by May 5th, will need to assign 3 engineers to start work on a proprietary set of libraries. This will delay project completion by six months unless additional staffing is added.

To create the above table, I have a simple CGI script (written in Perl) which allows me to edit the various fields in my web browser, and allows others (managers, my team members, and other teams) to view my risks whenever they want. I've used this successfully on several projects. [Maybe some day when I write my book, I'll include a CD with all the CGI scripts I use to lead projects. :-) ]

Colors? Where did the colors come from? I've found that colorizing risks has two benefits: (1) it draws your eye to the things you should worry about the most, and (2) managers often lack the time or attention span (and sometimes the ability) to read long sentences, so they need either cute graphics or colors. And since I'm not good enough at CGI to produce tachometer gauges, traffic-light graphics, or pie charts, I just colorize the rows. For my own purposes, a likelihood or impact of "high" is worth 3 points, "medium" is 2, and "low" is 1. Multiplying the two together yields the overall risk: 9 is critical (red), 2 or less is under control (green), and everything in between is a serious risk (yellow).

There's a fourth color: blue. I'll set the likelihood to "done" to show that the dependency has been met, or the impact to "none" if the risk has passed. "Done" and "none" have a rating of 0, so if either is 0, the risk becomes 0, so the item is closed and the row is colored blue. I might mark a risk closed and leave it in the table for a few weeks before finally deleting it.
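The scoring and color scheme described above fits in a few lines of code. This is just an illustration in Python; the author's actual tool was a Perl CGI script, and the function name here is made up.

```python
# Sketch of the scoring scheme described above: "high" = 3, "medium" = 2,
# "low" = 1, and "done"/"none" = 0. Multiply likelihood by impact to get
# the overall risk, then map the score to a row color.

RATING = {"high": 3, "medium": 2, "low": 1, "done": 0, "none": 0}

def risk_color(likelihood: str, impact: str) -> str:
    """Map a likelihood/impact pair to a row color."""
    score = RATING[likelihood.lower()] * RATING[impact.lower()]
    if score == 0:
        return "blue"    # closed: dependency met or the risk has passed
    if score == 9:
        return "red"     # critical: high likelihood AND high impact
    if score <= 2:
        return "green"   # under control
    return "yellow"      # everything in between is a serious risk

print(risk_color("medium", "high"))   # scores 6, a serious risk
```

A medium-likelihood, high-impact risk scores 6 and lands in yellow, which matches the first row of the example table.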

Early in the planning phase, you may come across a lot of risks, such as the risk that development will take longer, or that emergent tasks will arise. But as you do analysis, you should start planning for problems and a reasonable number of emergent tasks. Once you plan for problems, then it's not a risk that those problems will arise; it's the plan. In effect, the impact drops to "none" since the plan already accommodates these problems. When you're done with the planning phase, there should (hopefully) be few true risks that your plan does not already fully address.

When a good process becomes a bad methodology

I found this approach to be very useful, as did others. One day someone decided to establish a formal process for creating and using the risk analysis table. Instead of CGI and a web page, they created a spreadsheet.

In addition to likelihood and impact, they added "visibility" (your ability to observe the state of the risk; presumably risks that are hard to monitor warrant closer scrutiny). With three factors, each rated on a five-point scale, there were now 125 different "states" a risk could be in, so an appropriate number of colors were added to the rows -- chartreuse, fuchsia, and a few colors I didn't even know existed (and I'm not even sure they had names). The spreadsheet also included columns for things like who owned the external dependency, what their promised date was, whether they had agreed to your need date, when you talked to them last, and when the row had been last updated (just to make sure you were checking and updating your risks regularly).

The spreadsheet ended up with so many columns that it was impossible to view them all at the same time, even on a 21-inch monitor. And since this was a spreadsheet and not a web page, it became more difficult to share. I was told: post the spreadsheet on a web page, and people can download it as a file and open it. (I've found that most people want information immediately; if they have to download a file, their patience is exhausted and they don't bother.)

Soon, a team of people was responsible for making sure that every project leader had a risk and dependency spreadsheet. The "Spreadsheet Police" would check periodically to make sure you were updating your spreadsheet regularly. At quarterly program reviews with the engineering vice president, we were required to display the spreadsheet (shrunk down to an unreadable 6-point font and projected onto a screen) and discuss it with the VP.

A simple, informal process had become worse than a formal process; it had become a methodology. A bad methodology.

Project leaders hated the process. It didn't help them manage risks and dependencies, and only wasted their time updating useless information. Managers and VPs were frustrated because the display was too small to read and the content too detailed to absorb at their level of interest. Eventually, the entire process was scrapped.

The next day, I spun up my CGI script, and I was back to using my old web page for tracking risks and dependencies, and I've been using it ever since.

The moral of the story is simple: Follow the processes that help you, in a way that helps you the most. And if you do find a process that works well for you, don't tell anyone, or they'll turn it into a methodology!

Copyright 2007, Robert J. Hueston. All rights reserved.

Wednesday Apr 18, 2007

Know what you're doing (Part III)

Engineers should be the best project planners in the world. They aren't, but they should be.

The problem is, planning is rarely presented to engineers in a way that they can attack. Planning is viewed as some mystical art that few people have the skills to master. People who are naturally good planners employ a process (many without even realizing it) to decompose a problem into a plan. If you don't understand the process they followed, they may appear to be Mystic Planners.

Once we peel back the shroud of mystery, we can see that planning is an engineering problem -- you have to engineer a plan -- and no one is better skilled to solve an engineering problem than an engineer.

To master the process, we first must understand it. I'm no big fan of Six Sigma, but I will admit that the Six Sigma guys were great at inventing new acronyms (I suspect they spent most of their time creating cute acronyms, then desperately trying to come up with words that fit them). Several Six Sigma acronyms have the form DMAxx, where the letters DMA stand for define, measure, and analyze. The last two letters depend on what you're doing: IC to improve and control a process, DV to design and verify a new process. The first three letters basically mean "know what you're trying to do" and the last two letters mean "go do it."

I'm going to borrow some of the Six Sigma letters to define a new acronym for engineering a plan:

    DMADE: Define, Measure, Analyze, Design, Execute
Each of these is a common engineering term, and whether we realize it or not, these are the steps most of us follow whenever we engineer a new product, and a plan is just another engineering product.

DMADE And Engineering a Product

To help define the DMADE terms, let's first consider a concrete product engineering example. Consider, for example, my first engineering project back in college: to design an adapter board connecting a PDP-10 minicomputer to an off-the-shelf floppy disk controller. Sure, I had had many labs and term projects, but this was really my first engineering project. We hadn't studied the PDP-10 bus architecture, nor the floppy disk controller architecture. We were given the assignment, the reference manuals, a wire wrap extender card (I still have my wire wrap tool, right on the desk in front of me), and a bag of LSI chips, resistors, and capacitors. [Now, don't laugh too hard. The PDP-10 was already obsolete by the time I went to college. Why else would they allow undergraduates to fiddle with it?]

I remember being so lost, so adrift with this project at first. Where to start? In earlier classes, you would get a simple lab assignment, then immediately sit down and start wire-wrapping. So I sat there with my wire wrap tool in hand, staring at the blank board for what seemed like days (of course, 19-year-old boys have notoriously short attention spans, so in hindsight it was probably more like two minutes before I gave up and headed out to the Copper Mug for some fried mozzarella sticks and a beer. Oh, back then 19-year-olds could legally drink, too, which made this project all the more challenging). Finally, after nearly six days of procrastinating (and the night before the project was due), I decided I needed to attack this problem like an engineer.

First, I needed to define what I had to do. Something like: Convert I/O read/write cycles at a specific address into read/write accesses to the FD controller board. Down the left side of my page I wrote the list of signals coming from the PDP-10; down the right, the signals into the FD board. But that was still too general. I then broke the problem down further into block diagrams -- how to handle the address/data bus, how to handle the read/write enables (I seem to recall one interface had separate read and write signals, while the other had a combined read/~write signal). Then the clock had to factor into these blocks as well. Soon I had a well-defined problem. I could look at a block and know what it had to do, how it would use its inputs, and how its outputs related to the other blocks. I had decomposed the problem into quantities I could handle.

Next, I needed some measurements, that is, some quantitative data. There were set-up and hold time requirements, clock pulse width requirements, fan-out limits, and of course the logical relationships of inputs to outputs. I added that information to my block diagram, and I really started to feel like I understood what I was doing.

Next came analysis. I had to figure out how to synthesize the logic in each functional block, while also meeting the timing and loading constraints. Analysis came easily. That's the area where most of our training had been so far -- taking a simple block diagram or a set of equations, analyzing the relationships, and producing a small schematic representation.

Now I was able to finish the design. The functional blocks got mapped to real chips, and the chips were mapped to a physical layout. There was some back-and-forth with previous phases, when I realized I didn't have enough NOR gates, or I had spare NAND gates I could use instead to save me from wiring up another chip. But mostly this design phase was the inverse of the define phase. In the define phase, I decomposed the problem into small, easy-to-quantify and analyze chunks; now I was combining those chunks back together into the final product. At least on paper.

Finally, it came time to execute the design. I placed chips on the board, and wired them together based on my design. I powered on the board with a benchtop power supply and probed for proper voltages and signals. Then I plugged the board into the PDP-10 and powered it up and made sure there was no smoke and the monitor program ran normally. Then I inserted the floppy disk controller and powered it all back on again checking for smoke. The final step was verifying that the product was complete -- I had to format a floppy disk and write a file with my name on it as as proof that our design worked (this was a course in hardware design, so thank goodness the drivers were provided for us).

DMADE And Engineering a Plan

So, that long story was about product engineering, with the point being that all of us engineers go through the DMADE steps all the time. They are engineering steps. These same steps can be applied to engineering a plan.

Define

In planning, you must define what you're doing. The first step is identifying the product requirements.

Product requirements may be in the form of the features or functionality that the product must provide. Such requirements need to be concrete and testable. An acronym used to describe good requirements is SMART: Requirements must be Specific, Measurable, Achievable, Relevant, and Time-bound. And of course, any requirement that isn't SMART must be dumb. In many cases, projects go awry because the requirements are not specific or measurable.

Requirements may also be non-functional, such as quality requirements, staffing limitations, and cost constraints. And as I've said many times, requirements should be prioritized, and a line drawn between must-have requirements and nice-to-have requirements. Plan to deliver all of the requirements, but make it clear to your customer (your manager, your marketing department, your venture capitalist) that your plan can only guarantee the must-have requirements.

But so far we've only defined the input/output requirements, much like the first step in the product engineering.

Definition also includes identifying the tasks that must be done to achieve the requirements. We analytically break down the requirements into high-level deliverables, then decompose high-level deliverables into smaller tasks, and identify the interdependencies along the way. In effect, the tasks are blocks in a block diagram. Peer review and inspection can be beneficial in helping to ensure that we didn't miss any tasks, just as peer review is essential in product engineering.

Task definitions must also be SMART: Specific, Measurable, Achievable, Relevant, and Time-bound.
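The decomposition described above, with tasks as blocks and interdependencies as the wires between them, can be captured in a small data structure. This is an illustrative sketch, not the author's method; all task names are hypothetical.

```python
# Tasks as named blocks with explicit dependencies, plus a check that the
# dependencies actually form a workable order (a topological sort).

tasks = {
    "driver":      {"depends_on": []},
    "ui":          {"depends_on": []},
    "boot-code":   {"depends_on": ["driver"]},
    "integration": {"depends_on": ["boot-code", "ui"]},
}

def build_order(tasks):
    """Return a valid order of work; fail loudly on circular dependencies."""
    order, done = [], set()
    while len(order) < len(tasks):
        ready = [t for t in tasks
                 if t not in done and all(d in done for d in tasks[t]["depends_on"])]
        if not ready:
            raise ValueError("circular dependency in the plan")
        for t in sorted(ready):
            order.append(t)
            done.add(t)
    return order

print(build_order(tasks))
```

Writing the dependencies down explicitly is what makes the "peer review and inspection" step possible: a reviewer can scan the list for tasks that are missing or blocked.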

Measure

When people were "taught" (and I use the term loosely) planning, there was usually an emphasis on estimation. But good plans are not based on estimation; they are based on measurement, that is, quantitative data.

In the Define phase, we identified the individual tasks that need to be done. Now we must measure them. And how do you measure something? If we don't know how big something is, we compare it against something we do know (a yardstick). We don't always need a physical yardstick. For example, when we meet a person on the street, we know almost immediately how tall they are; we instantaneously compare that person against our own height (the yardstick), and if they are slightly taller or shorter, we can fairly accurately measure their height. When a person is significantly shorter than I am, I'll compare their height against my wife's.

Similarly, when measuring tasks, we must use a yardstick. We need to pick a task that we've already done and compare it to this new task. If the new task is twice as big as the old task, then we know it will take about twice as long.

Analyze

In engineering a product, the analyze phase involved looking at the various functional blocks, and dealing with non-functional requirements such as timing and loading. When engineering a plan, we also must analyze the non-functional requirements and how they impact the tasks.

In my posting Know what you're doing (Part I), I used the example of a drive to the airport. We easily identified that all of the tasks (return rental car, bus to terminal, check-in, security, walk to gate) should take 30 minutes. But when we analyzed the situation, we realized we needed two hours -- there could be traffic, a breakdown, detours, lines at the rental car return, delays in security, and so forth.

In a more general sense, our analysis must take into account many things that might affect our team members' ability to complete the task, for example:

  • Team member efficiency. The development environment, training, and how often you've done projects like this in the past will all significantly impact your team's efficiency.
  • Team member availability. This is the team members' ability to spend time being productive. Reading email, attending general staff meetings, signing up for health care, taking Six Sigma training, and reading the Boston Globe online all cut into a person's available time. A weekly, one-hour departmental staff meeting alone consumes 2.5% of a 40-hour week. I typically assume at least 10% of a person's time is lost to this kind of overhead.
  • Vacation and sick time. A typical employee may get 15 days of vacation a year, and may be sick an additional 5 days. Twenty days of absence per year equates to roughly 7.5% of a person's time.
  • Sometimes people are unable to dedicate 100% to one project. They may have worked on another product and need to lend support periodically. Or they may be experts in a specific field and need to help other projects in times of emergency. And generally speaking, the better the engineer, the more likely they will get pulled off to work on other projects. Let's assume they spend 10% of their time helping other projects.
  • Finally, there's planning for emergent tasks, the unexpected tasks that always arise. They may be as mundane as your PC dying or taking a training class to learn how to use a new development tool, or as critical as solving a race condition in some multi-threaded code that only happens once every three days. No one plans on these tasks, but we can plan for these tasks. On a small, low risk project, emergent tasks may be a small percentage (10%); on large or high risk projects, emergent tasks may consume half your team's time. For a general number, let's assume a typical figure that 20% of your team's time will be spent on emergent tasks.
After this quick, back-of-the-envelope analysis, we can see that team members may only be able to spend half of their time working on planned tasks. And in order to accurately measure the time it will take to complete a task, we have to know the person who will be working on it -- we have to know how productive they are, how well they work in the team environment, how well they know the task at hand.
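The back-of-the-envelope arithmetic above can be made concrete. This sketch compounds the illustrative overhead figures from the list multiplicatively; the cruder additive version (10 + 7.5 + 10 + 20 = 47.5% lost) lands even closer to the "half their time" rule of thumb.

```python
# Compound the overheads listed above to see how much of a team member's
# time is actually left for planned tasks. Percentages are the text's
# illustrative figures, not universal constants.

overheads = {
    "general inefficiency (email, meetings, ...)": 0.10,
    "vacation and sick time":                      0.075,
    "supporting other projects":                   0.10,
    "emergent tasks":                              0.20,
}

available = 1.0
for reason, fraction in overheads.items():
    available *= (1.0 - fraction)

print(f"time left for planned tasks: {available:.0%}")   # about 60%
```

Either way you run the numbers, a plan that books people at 100% of their calendar time is wrong before the project even starts.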

I think analysis is the most critical step in engineering a plan, and it is often overlooked. New project engineers will measure how long it would take them to do each task, sum up the task durations, divide the total by the number of people in the budget, and promise a product in X weeks. The product ends up taking 2X or 3X weeks, and they just don't understand why.

As an aside, one of the best tools for analyzing a plan is CoCoMo. I've used CoCoMo in one form or another for over twenty years. An easy-to-use version is available online.

Design

Design is the opposite of definition. In the definition phase, we decomposed a problem into tasks; in design, we construct a set of tasks into a plan.

But a plan isn't a schedule. It isn't a list of milestones. It isn't a Gantt chart (and you don't want to get me started on Gantt charts!).

A plan must address the who, what, and when questions about the project. It must also address the "what if" questions. Contingency planning is critical in the design phase. For reference, see Everything I needed to know about project planning I learned from my little league coach.

When you're done designing your plan, you should be able to identify:

  • What will the project deliver, and when.
  • Who will be working on the project.
  • What task is each person responsible for.
  • When is each task expected to be complete.
  • How much is this going to cost.
  • What equipment and tools are needed.
  • Where (lab space, office space, etc) will the work get done.
  • What do you depend on from outside the project team.
  • What will you do if anything that can go wrong, does go wrong.
A well-designed plan answers all of those questions.

Execute

I'll cover plan execution in my blog entry, "Know what's going on".
Copyright 2007, Robert J. Hueston. All rights reserved.

Monday Apr 16, 2007

Know what you're doing (Part II)

Sun Tzu advocated: Know what you're doing. Figuring out what you're doing, that is, project planning, can be one of the biggest challenges for new project leaders. How do you go from what is often a vague problem statement to a well-defined plan for delivering a project, with a predetermined set of features and quality, in a defined time?

The first step in planning is to recognize the goal of planning:

    A plan describes how you will deliver a product with a minimum set of functionality (features) and quality by a certain time with a specified maximum cost.
A plan is not a schedule (although a schedule will be part of most plans). The plan starts with how. How you will deliver a product.

There are four quantities a plan must define: functionality, quality, time and cost. Some people will group functionality and quality into one term; however, I insist on separating them because quality (the degree to which you guarantee there are no defects in the product) can vary widely across products. As an example, when I worked in aerospace developing jet engine monitoring and control systems, the quality requirements were very stringent, as even a minor defect could cause the deaths of hundreds of people. The cost, staffing and time to verify these embedded systems was very high (especially when you needed to test with a real airplane). On the other end of the spectrum I've worked on graphical user interface front-ends where most defects are, at most, just annoying, and the cost of verification is fairly low since defects are usually easy to spot, and require just my own workstation for testing. Quality tends to be fixed for a product, but it may vary widely between product lines.

The interesting thing about the four quantities in a plan is that they tend to exhibit a constant mathematical relationship:

    functionality × quality = time × cost
In my posting Top-Down or Bottom-Up, I used the example of home improvement. I had a set of things I wanted to get done (functionality), some cash on hand (cost), I wanted to be done by Summer (time), and I wanted a good job but my house isn't exactly an historical landmark (quality). Each contractor put together a proposal, and those that offered higher quality or more features would cost more or take more time.

The problem with planning is that the equation is often over-constrained and out of balance. For my house, I wanted lots of features and high quality for a low price, and I wanted it now. Similarly, marketing or management may specify the functionality, quality, time and cost, but the equation just doesn't add up. It is the role of the engineering project leader to make the equation balance.

If you want more on the left side of the equal sign (more functionality, higher quality), it's going to increase the right side (take more time, cost more money). When management asks you to pull in the schedule (reduce the time), you need to turn around and ask them, "Do you want to reduce functionality, reduce quality, or increase cost (e.g., add staff, buy more equipment)." If they ask you to reduce schedule and cost, and quality is fixed, there's no choice but to reduce the delivered functionality.
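The relation is a rule of thumb, not a literal formula, but even as a toy it shows the direction of the trade-off. This sketch (with arbitrary units and made-up numbers) treats the left side as fixed and solves for cost when the schedule is compressed:

```python
# Toy illustration of functionality * quality = time * cost.
# If functionality and quality are fixed, shrinking time forces cost up.

def required_cost(functionality, quality, time):
    """Solve the balance equation for cost, holding the left side fixed."""
    return (functionality * quality) / time

baseline = required_cost(functionality=100, quality=2, time=10)   # 20.0
crashed  = required_cost(functionality=100, quality=2, time=8)    # 25.0

print(baseline, crashed)
```

Pulling the schedule in by 20% raised the required cost by 25%, which is exactly the conversation to have when management asks you to "just move the date up."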

This is getting a bit long, so I'll save for my next entry why (and how) engineers should be the best planners on Earth...

Copyright 2007, Robert J. Hueston. All rights reserved.

Thursday Apr 12, 2007

Know what you're doing (Part I)

Sun Tzu advocated: Know what you're doing. Clearly Sun Tzu knew the importance of planning for software development projects.

Occasionally I give an internal presentation on project planning, as part of a multi-day class for new software project leaders. As Sun has a widely dispersed engineering work force, there are always a few people who fly into the Boston area for the class. My portion is usually on the last day, and I start my lecture with a question...

"Who plans on flying home today?" Inevitably, a few people have tickets for the evening flight. I pick one of those people at random and ask what time their flight is; a typical answer is around 6pm, the last flight to the West Coast. "So, let's plan your trip," I continue. "We're about 15 miles from Logan Airport, so it takes about 20 minutes to drive there. After returning your rental car, the bus ride to the terminal is about 10 minutes. And it takes another 5 minutes to check-in and 10 minutes to get through security. That's 45 minutes, on a good day. So you'll leave for the airport at, what, 5:15?" Laughter usually ensues.

The response is usually, "No! I'm leaving as soon as the lecture is over, 4 O'clock at the latest. If I left at 5:15, I'd probably miss my plane. There could be traffic, a breakdown, lines at the rental car return, or a delay at security."

"Who here has ever missed an airplane, even by just a minute?" I'll ask the class. Maybe one or two hands go up. "Who here has ever missed a project deadline, even by one day?" All hands go up. "So you plan your personal time better than you plan your software engineering projects?" At this point, a look of enlightenment usually overtakes the room. We do everything we can to avoid missing a flight, but we're not nearly as concerned about missing a project milestone. Missing a flight might cost us another night in a strange city, but missing software project milestones costs companies huge amounts of money in terms of development costs, delays, and time-to-market.

The flight's departure time is a contract, between the airline (they will not leave before the specified time) and you (you will be there before the specified time). The same is true of a project plan. A plan is not an estimate of when you might be done; it is a promise, a contract, of when you will be done. You can't always keep every promise, honor every contract to its fullest, but we need to treat milestone dates like promises, and do everything in our power to hit the dates. And as soon as we know the date is at risk and the promise is in jeopardy of being broken, we must let people know. When I promise my wife I'll be home at 6, she doesn't mind too much if I call at 5 and tell her I'll be an hour late. But she gets really angry if I just stroll in at 7 without any notice. If she knows I'm going to be late, she can change her plans. If I don't tell her, dinner gets burned in the oven and ends up buried in the backyard.

In one class, a person asked, "I see what you're saying. But how do you justify it to management when you identify 6 months of work, but you tell your boss that it will take 12 months?" The same way you justify to yourself leaving two hours before your flight time. You know there are risks driving to the airport, so you allow extra time. But you also plan for the case when the roads are clear and you breeze through security. Maybe you skip dinner and plan to eat at the airport (and if you're late, you may fly hungry, but at least you're in the air on time). Or you bring a book or your laptop so you can read or do work if you have extra time.

The same goes for project planning. You prioritize your requirements, your features, and separate them into must-haves and nice-to-haves. You create your plan for all the requirements, but only promise the must-haves. If you finish the must-have features early, you can do more of the nice-to-haves. But if you start running out of time, you drop the nice-to-haves from the list, low priority first. Returning the rental car, passing through security, and getting to the gate are must-have requirements; eating dinner and reading a book, while important, are still just nice-to-have requirements.
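The prioritization rule above, promise only the must-haves and drop nice-to-haves lowest priority first when time runs short, is mechanical enough to sketch in code. Feature names and numbers here are hypothetical.

```python
# Commit to the must-haves; add nice-to-haves by priority while time allows.

features = [
    # (name, must_have, priority [1 = highest], days)
    ("return rental car", True,  1, 5),
    ("pass security",     True,  2, 3),
    ("eat dinner",        False, 1, 4),
    ("read a book",       False, 2, 2),
]

def commit(features, days_left):
    """Return the feature names we can actually promise in days_left."""
    plan = [f for f in features if f[1]]                 # must-haves always stay
    budget = days_left - sum(f[3] for f in plan)
    nice = sorted((f for f in features if not f[1]), key=lambda f: f[2])
    for f in nice:                                       # highest priority first
        if f[3] <= budget:
            plan.append(f)
            budget -= f[3]
    return [f[0] for f in plan]

print(commit(features, days_left=13))
```

With 13 days, the must-haves (8 days) are safe, the top nice-to-have fits in the remaining 5 days, and the lowest-priority item is the first thing dropped, which is the whole point.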

If you approach project planning with the same forethought as a drive to the airport, in the end you will deliver a project with all the must-have features, and maybe some of the nice-to-have features, and you'll deliver it when you promised.

So how do you create a plan? Stay tuned for my next blog entry...

Copyright 2007, Robert J. Hueston. All rights reserved.

Tuesday Apr 10, 2007

Know yourself

Sun Tzu advocated: Know yourself. When leading an engineering project, it is imperative that you know the people on your team and their abilities, and you build a team with the capabilities to get the job done.

A modern army corps consists of several different types of divisions -- infantry, armor, cavalry, artillery, and support (here I use the term "division" fairly loosely. While infantry and armor are typically full divisions, artillery and cavalry may be deployed in battalions or regiments). Each division has its own abilities which complement and support each other. While we all know the importance of the tank in modern warfare, no army would be able to function if it consisted of only armored divisions. Without infantry, cavalry and support, armor would be bogged down, run out of fuel, and be easy targets.

A software engineering team is not unlike an army corps. The team must consist of people with a wide set of skills which complement each other. In my experience, I have found that most engineers fall into one (or maybe two) categories analogous to army divisions. Creating a team with all of the right divisions, that is, the right combination of talent, is a critical responsibility of the team leader.

The following sections describe the types of engineers you would normally find on a functional software project team.

Corps HQ

No army corps would function without Corps Headquarters -- Corps HQ. HQ gives out orders, assigns objectives, and makes sure the corps consists of the right mix of cavalry, armor, infantry and support groups. Without HQ, an army corps would lose direction, and would degrade into a group of independent divisions rather than a coherent army corps.

In a software engineering project, the project leader is typically HQ; however, that isn't always the case. I've seen many cases where the team direction and leadership comes from an engineer, rather than the person designated as the leader. In those cases, the project leader may play a more administrative role, documenting decisions and plans, and allowing another person to give orders and assign tasks. This approach can work, as long as there is general cooperation among the people involved. Even an army HQ is not a single person -- it is a set of people who work together to lead the corps.

Sometimes the HQ role moves around during a project, for example, when the project leader takes vacation, or needs to temporarily work on a different assignment. The person on the team who steps up to fill that HQ role during a leader's absence is usually someone who will do a good job as project leader of the next project.

Cavalry

The armored cavalry regiment (ACR) in the US military is specially organized for reconnaissance and surveillance. They are generally equipped with armored vehicles. Being lighter and quicker than an armored division, the ACR can move quickly, find the enemy, and move on, allowing the armored divisions to prepare and engage the enemy in force.

On the other hand, the cavalry is not designed to engage the enemy on a large scale. They are lightly armored and if forced to stand and fight for a long time against a heavily armored enemy, they may be decimated. The cavalry must be kept on the move to be effective.

In engineering, the cavalry is the person who quickly spins up on a problem, identifies the critical issues to be resolved, and perhaps figures out a prototype solution. These people are usually the first ones selected for "tiger teams" (which, after all, are basically ACRs) to address critical problems. Everyone knows the cavalry; there's usually a line of people looking for help outside their office. People go to the cavalry because they know they don't need to invest a lot of time explaining the problem; usually just a few words and a vague description, and the cavalry is off working on a solution.

While cavalry engineers are important to a project team, they must be used correctly to be effective. These engineers are best suited to short-term, high priority, and high pressure projects. When they are assigned large-scale development work, they can become bored, unfocused, and bogged down in details, and lose effectiveness. You might even find them going off looking for ways to help others in a crisis, rather than delivering their own work. A cavalry engineer, when not properly led, may get labeled "renegade" or "loose cannon;" those labels usually indicate poor leadership, not a poor performer.

Armor

Armored divisions, with perhaps 200 tanks and 200 armored fighting vehicles, are the workhorses of an army. An army corps might have three armored divisions, compared to one infantry division and one cavalry division or ACR. The armored divisions pack a strong punch, can stand up to enemies for long periods of time, and often won't give up until success is achieved.

The armor on an engineering project are the people who produce the most. Given a clear set of goals, they analyze problems, write specifications, produce well documented and well tested code, and continue to do so for months or years at a time. They typically love working on projects from start to finish, and in their wake they leave a trail of high-quality products. These people aren't easily discouraged by problems -- problems are just more challenges to attack and overcome. Every successful project I've seen has a majority of armor on its staff. They might not spin up on a task as quickly as cavalry, but they have the staying power to see the fight through.

But tanks can't do everything. Some terrain cannot be crossed by tanks -- rivers are a serious obstacle, and require the engineer corps to build bridges. Advancing a tank column takes skill and planning, and cavalry can provide the critical intelligence to help plan their path.

Tanks must also be allowed to fight like tanks, using their speed and firepower to defeat the enemy. In Desert Storm, the US Army lost four tanks to enemy fire. Three were lost in some of the largest tank battles ever fought by the American Army; the fourth was destroyed while guarding a group of prisoners. Forced to move at the slow pace of walking POWs, the tank was easy prey for an anti-tank missile launched by a couple of soldiers in a jeep.

Engineering armor must also be allowed to fight like tanks, to work hard and be productive. If they become bogged down, by leadership indecision or technical obstacles, they may lose momentum, and in the process, lose effectiveness. It's critical for project leaders, with the help of cavalry, to chart a clear technical path for the armor to plow through. Armor must be well supplied and equipped; you don't want simple issues like a lack of disk space or not enough test equipment to slow down their progress. And once the armor has passed through and the major work is done, infantry must occupy the landscape.

Infantry

The role of infantry in the modern army is to attack unarmored targets, occupy and defend territory, and patrol for the enemy. Infantry moves slowly; however, it is thorough and can engage in house-to-house operations where armor is poorly suited due to limited maneuverability. Infantry can move to fully control an area, once cavalry and armor have destroyed major opposition.

In software engineering, infantry plays a similar role of occupying, patrolling and defending the territory conquered by the armor. Infantry engineers are very detail-oriented. Some of the key missions for infantry in software engineering are: design and code inspections, development in a support role, test development, and bug fixing. Where an armor engineer will get the code 99% right very quickly, the infantry engineer will get it 100% right, albeit a bit more slowly.

Without infantry, your armor must take over this occupation role. Having highly productive engineers working on minor bug fixing is not necessarily efficient. On the other hand, infantry can play a big support role in the middle of development. As an armor engineer conquers a major problem, pieces often can be split off for infantry engineers to code up and test.

In some cases, infantry can be the junior engineers; however, there is a breed of engineer that is perfectly suited to infantry tasks. These people are very meticulous and thorough, but have a problem seeing the "big picture."

Support Groups

An army would come to a standstill without its support groups. No food. No ammunition. No fuel. No mail. No hospitals. While the support groups are not intended to engage the enemy directly, without their active participation, the enemy would surely win.

Likewise, an engineering project depends heavily on the support groups, such as:

  • Laboratory facilities (locations for equipment, network connections, power, logic analyzers, etc.).
  • Physical facilities (office space, desks, lamps, computers, printers, etc.).
  • Information technologies (file servers, disk space, CAD/CAE tools, network infrastructure, etc.).
  • Training, for new technologies.

One Size Does Not Fit All

Divisions of an army corps are identified based on their equipment and their training. An armor division cannot easily recast itself as an ACR or an airborne division.

In software engineering, a label is not so easy to pin on an individual. One person may act as the cavalry for certain periods of time, and do it well, but prefer the role of armor. Another person, nominally infantry, could step up and tackle work normally assigned to armor. Some people can play any role you want; you just have to tell them what you need from them.

The roles I describe above are not meant to be pigeonholes, or branded on a person for life. They are only meant to describe the role a person plays on the project team at any particular point in time, and to show that a project team needs a variety of talent, just as an army needs a variety of divisions to be highly effective.

Building a Fighting Force

One of the early responsibilities of a project leader is to build a project team. That team will typically include cavalry, armor, and infantry engineers, and will rely on support groups from around the organization. A team which lacks cavalry may not foresee technical issues and could get bogged down in the middle of the project. A team which lacks infantry may produce a product that is generally good but has a lot of little, annoying bugs. But when the team has the right balance of force, it will be most productive and successful.

In practice, a leader does not label engineers in this manner. But a good leader will identify the strengths of each team member, and understand the strengths, and weaknesses, of the team as a whole. Good leaders will say, "We need a person like that on our team" -- a cavalry engineer to scout out issues in advance, an infantry engineer to work on details in order to free up someone else, or an armor engineer to plow through work -- without even realizing the gap they're trying to fill.

In order to build a successful team, you need to know your team -- know yourself.

Clancy, Tom and Franks Jr., Fred, Gen, Into the Storm, New York, 1997
Copyright 2007, Robert J. Hueston. All rights reserved.

Tuesday Apr 03, 2007

Leadership Roles and Enemies

In my last blog entry, Know The Enemy, I identified seven common enemies to a project. In my blog entry, "Beekeeper, Shepherd and Cowboy" I described three common leadership roles, asserting that highly successful leaders will exhibit all three roles at the appropriate times. In this blog entry, I wanted to relate the leadership roles to project enemies.

To summarize:

  • The Beekeeper is systems-oriented. He understands the product and the processes needed for success.
  • The Shepherd is people-oriented. He knows how to work with others, and how to work with his team, to build trust and get the most out of people.
  • The Cowboy is goal-oriented. He knows what he wants to do, and drives everyone toward that goal.

Enemies, and the Leaders That Defeat Them

A. The Problem

When there's a problem to be solved, the leader must let his "Cowboy" lead the way. The Cowboy understands the problem, and quickly identifies a path to a solution.

B. Process Snags

The Beekeeper worries about systems and processes. He understands the processes that an organization places on its project leads, and works with those processes to achieve success.

C. Deal Makers/Deal Breakers

Deal Makers can quickly become Deal Breakers if they are not brought into the team. The Shepherd works well with people, seeks to understand their interests and needs, and incorporates their ideas thoughtfully into the project plan.

D. Organizational Miscues

Organizational issues typically arise from two things: A lack of knowledge of the organizations and processes, or an inability to work with leaders from other organizations. The Beekeeper knows the organization and processes. The Shepherd works well with leaders.

E. Quality

Maintaining quality is the job of the Beekeeper. He thinks about the system, and how to disassemble the requirements and assemble a quality product.

F. Inertia, Brownian Motion, Entropy and Chaos

No one addresses inertia and chaos like the Cowboy. In the real world, getting a herd of cattle to start moving, and to keep it moving in the pre-planned direction, is a main job of the cowboy.

G. The Hidden Enemies

Finding the hidden enemies requires a little of all three leadership roles. The Beekeeper is organized and has a process for finding the enemies (for example, scheduling periodic meetings, using brainstorming techniques to encourage ideas, documenting and tracking the issues once they're identified). The Shepherd works with his team to create an environment where they feel comfortable raising issues and concerns; people must trust their leader before they're willing to point out things the leader has failed to see. And the Cowboy forces the issues into the open, even when people are not fully willing to share their concerns.


Previously, when I identified the three successful leadership roles, I stated that the most successful leader will exhibit all three. The enemies listed above show how one leadership role alone is not sufficient to identify and combat all enemies. It takes a person who is a blend of all three roles.

Monday Apr 02, 2007

Know your enemy

Sun Tzu advocated: Know your enemy. In software project leadership, the first step is to identify the enemy. The enemy may take several forms, and each must be addressed in a different way. Once the enemies are identified, they can be scoped, and plans made to combat them.

Potential Enemies

The following are several of the common enemies a project leader must fight. This is by no means a complete list; it is intended to start you thinking beyond simply writing code.

A. The Problem

I took a graduate-level discrete math course some years ago. The professor (I honestly forget his name) started the first class by saying, "Don't ask me how to apply this to the real world. This is a class in math. One plus one; that's math. One orange plus one orange? Well, that's physics." That led me to my own definitions of the physical sciences:
    Math is the study of numbers.
    Science is the use of math to explain the world.
    Engineering is the use of science to improve the world.

Engineering exists to solve problems and improve the world. So the fundamental reason for any engineering project must be to solve a problem. However, many engineering projects suffer because they don't know what problem to solve, or they solve the wrong problem.

There are numerous methods of identifying the problem: requirements analysis, in-scope/out-of-scope charts, prioritized feature lists (with must-haves, nice-to-haves, and non-requirements), and so forth. This area of engineering is well covered in modern literature (although many projects still fail to take the time to define the problem before they start), so I won't belabor it in this blog. But I will raise one point.

Rarely is a problem really solved unless the solution meets or exceeds the customers' expectations. In the commercial world it's impossible to talk to all the customers (and even if you did, they'd all have widely different expectations). However, I've found that reasonable proxies for the customer are the employees who work with the customers -- field service engineers, application engineers, etc. Typically, they meet enough customers to know, in general, what their expectations are, what minimum features are absolutely required, and what will really knock their socks off. I like to have a Service Engineer on my project team, fully engaged, attending every meeting, and intimately knowledgeable of the product we're developing. When a question comes up about how a feature should be presented to the customer, I turn to the Service Engineer (who in turn consults with other Service Engineers in the field) to choose the best option.

B. Process Snags

I've seen many projects suffer because they did not fully identify the process requirements in advance. Many process requirements stem from corporate quality initiatives; others are just common sense. Some examples of process requirements include:
  • Reviews (requirements, architecture, design, code, etc).
  • Testing requirements.
  • Certifications/qualifications.
  • Sign-offs and approvals.
I've actually been on projects where we got near the end of development, with only a few weeks to go before release, only to learn that there was a process step we didn't know about -- a review to get approval to start development. And the committee that approved projects was booked for the next two months. Issues like that can be overcome by expediting the process, bumping other projects off the agenda, or scheduling an emergency review meeting, but those options bring added costs, both in money and in goodwill.

Understand and identify all of the process steps required by the company, customers, and government regulations, and plan for them in advance.

C. Deal Makers/Deal Breakers

In my own experience I have found that no one person can guarantee your project will be successful. However, I have found cases where one person can guarantee your failure.

Deal Makers are the people who can help you succeed, if you get them on your side. And if you don't get them on your side, they can often prevent you from succeeding by withholding critical approvals, or convincing those in approval positions to delay approval. They become Deal Breakers.

When looking for these Deal Makers/Deal Breakers, ask yourself:

  • Who will use the product?
  • Who will build it?
  • Who will test it?
  • Who will service and support it for customers?
  • Who needs to approve it?
  • Who supplies the money? The people? The resources? The lab space?
  • Whose opinion carries a lot of weight with the other people listed above?

Getting a Deal Maker on your side is often very easy:

  • Identify and engage them early in the project.
  • Share with them your thoughts and plans.
  • Seek their input, and thoughtfully apply their input or provide feedback why you could not.
  • Keep them informed of changes; make them feel a part of the team by making them part of the team.
  • If they feel like a valued part of the extended project team, they will work to make the project a success.

The enemy described here is not really the Deal Makers/Deal Breakers; they can either make or break your project. The enemy is within: if we fail to identify Deal Breakers and convert them to Deal Makers, we are our own enemy.

D. Organizational Miscues

Any large company has significant organizational communication requirements. For example at Sun, a project delivering a feature into Solaris must:
  • Notify technical publications to schedule man pages and other Solaris document changes.
  • Notify the internationalization department so they can schedule localization of all text output.
  • Work with the team that creates packages to make sure your new files are delivered and installed properly.
  • Contact the legal department if there are any patentable inventions.
  • Work with the organization responsible for the Solaris product to ensure that the project schedule aligns with Solaris build and release schedules, and the project satisfies all Solaris quality requirements.
and so forth.

Small companies, on the other hand, aren't so lucky. Their organizational communication channels almost always span companies -- publishing companies for their documents, translation companies for localization, companies that build and distribute their product, etc.

Regardless of whether the organizations are internal or external, they must be identified early and addressed in planning.

I've seen projects work toward their own schedule, only to discover they forgot about technical publications. They're almost done, but no one has written man pages or customer documentation. The product release gets delayed while waiting for the other organization, in this case Technical Publications, to catch up. Leaders who forget to communicate with other organizations will often blame the other organization ("We're waiting on Tech Pubs" or "Our ship date got delayed due to the Solaris release team."). The other organizations aren't to blame; the leader is, for failing to openly and fully communicate with them. A product is everything -- the software, the documentation, the release vehicle -- and a project leader is responsible for making sure all aspects of the product are coordinated, regardless of which organizations are involved.

E. Quality

Of course, "quality" is not an enemy; it is a goal. However, I couldn't find a suitable antonym to "quality." Technically, "defectiveness" is probably suitable; "non-conformance" is perhaps the most appropriate. Whatever you call it, allowing defects and nonconformities to advance through the development cycle is a dangerous and costly enemy.

I could easily write an entire blog entry (perhaps an entire book) on how to achieve quality, but suffice it to say that the methods of achieving quality must be addressed early, planned, resourced and executed in order to ensure a quality product. Some of the common initiatives in a software development project include:

  • Requirements Inspection: Are the requirements consistent and complete, and do they address the problem?
  • Design Inspection: Does the design achieve the requirements?
  • Code Inspection: Is the code written to the design?
  • Unit Testing: Does the code execute as written?
  • Integration Testing: Does each unit work together as designed?
  • System Testing: Does the product perform as required?
I will go into the differences between inspections and reviews in a later blog entry.
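To make the unit-testing item in the list above a little more concrete, here is a minimal sketch in Python. The function under test, `parse_pair`, is purely hypothetical -- invented for illustration, not taken from any real project:

```python
import unittest

# Hypothetical unit under test: parse a "KEY=VALUE" string into a tuple.
def parse_pair(text):
    key, sep, value = text.partition("=")
    if not sep or not key:
        raise ValueError("expected KEY=VALUE, got %r" % text)
    return key.strip(), value.strip()

class ParsePairTest(unittest.TestCase):
    # Unit testing asks: does the code execute as written?
    def test_simple_pair(self):
        self.assertEqual(parse_pair("host = example.com"),
                         ("host", "example.com"))

    def test_rejects_malformed_input(self):
        with self.assertRaises(ValueError):
            parse_pair("no separator here")
```

Run with `python -m unittest <file>`. Integration and system testing build on the same idea, exercising units together and then the product as a whole.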

F. Inertia, Brownian Motion, Entropy and Chaos

Often a problem is identified, but getting it fixed requires overcoming inertia -- it often can be far easier to live with a problem than mobilize effort to fix it, especially when that problem is not affecting you directly (though it may be affecting your customers, your service organization, and your sales organization). A good leader needs to be a motivator, and drive the team to overcome inertia and get the job done.

In high school physics, most people learn that inertia is the tendency of a body at rest to remain at rest. It takes some external force to overcome inertia and get the body to move in a new direction. The second part of the principle of inertia is that once a body is in motion it will tend to stay in motion in a straight line.

People, however, do not fully adhere to the laws of motion. While at rest, they do tend to stay at rest; however, once a team is in motion in a coordinated and well planned direction, if the driving force is removed, the team members tend to either return to a state of rest, or worse, they tend to exhibit Brownian Motion -- wandering off in random directions, sometimes pursuing contradictory goals, and often bumping into each other and expending pointless energy. Over time, entropy sets in and the entire team digresses into chaos.

The term "project leader" is somewhat of a misnomer -- a leader must not only lead, they must push (however, the term "project pusher" isn't much better). A leader with no followers is a failure. A leader must be able to provide the impetus to drive the team forward, and channel the energies of the team members in a coordinated manner to attack the problem at hand. Inertia, Brownian Motion, entropy and chaos are the enemies of leading people.

G. The Hidden Enemies

Have you ever been burned by something, and someone says, "I knew that was going to happen"? Well then why didn't you warn me?!?

One thing I've learned is to periodically (initially monthly, later quarterly) bring the team together and ask each and every person, "What do you worry about?" And I expect an answer, a list of enemies, from each person. Sometimes people will respond that they don't have any worries, but when you force them to think, the worries come out, and the enemies emerge. Sometimes they are just shadows; other times they turn out to be real issues that need to be addressed immediately. You can't fight the enemy you don't know about, but once they're out in the light, they can be defeated.

Not Enemies

In addition to the above potential enemies, there are a couple of things which are not enemies that I wanted to address here.

A. The competition

Perhaps in sales and marketing, competing companies can be considered enemies. We try to get products to market faster than the competition, that are better than the competition's products, and cost less than the competition's products (that is, cost us less to make, even if we charge our customers more). In engineering the competition is a competitor; however, it is not an enemy. Apart from industrial espionage and sabotage, the actions of the competitor do not affect our time to market, our quality, or our cost. We can blame the competition for our loss, but in reality we only have ourselves to blame.

Often, the competition can actually be a collaborator. Engineering is the application of science to solve problems. If the competition solves the problem first (and shares the solution), that can be to our benefit. Industry working groups, standards organizations, etc., are key examples of the collaboration between companies to solve a problem, while they compete to develop and release products based on that collaboration.

B. Process

In high school I was a high hurdler on the track team. My coach used to tell us, "The hurdle is your friend." The job of a hurdler is to run down the track, leaping cleanly over the hurdles. You have to work with the hurdles. If you try to mow them down, like they're the enemy, it only slows you down and you will lose.

Far too many leaders view process as the enemy. An enemy is something you attempt to defeat or avoid. When you work to defeat or avoid required processes, you're wasting your energies.


Knowing the enemy implies more than just knowing who the enemies are. We must quantify their location, strength, capabilities and intentions. The military accomplishes this through surveillance and reconnaissance; the former is passive, while the latter is active. I think most people rely far too much on surveillance, for example, past experience, and observations from other people. Instead, we need to actively go out and patrol for the enemy.

Examples of active reconnaissance include:

  • Prototype critical aspects of the design. A quick prototype can uncover unexpected problems before too much time is invested in a detailed design.
  • Communicate with process owners to make sure you understand the process requirements, and how the process may be tailored to your advantage.
  • Contact organizational leaders and identify their requirements.
  • Engage deal makers/breakers to understand what it will take to gain their support and bring them into your team.
  • Motivate and orient the team, early and often.

Respond In Force

Once you know the enemy, you can prepare an offense. The information we learn about our enemy must be brought forward into the project planning. We must allocate resources to combat the enemies, all of the enemies. We must proactively engage and defeat the enemies and not sit back and wait for them to engage us. Success must be achieved; failure comes with little effort.

Monday Mar 26, 2007

Sun Tzu and the Art of Software Project Leadership

In a previous blog entry I asserted that we can learn a lot about software engineering leadership by studying military history. For one thing, software engineering has only existed for a few decades; people have been fighting wars for more than five thousand years. In essence, we can learn to be good leaders by studying great leaders in history, and no discipline has a better documented history than the military. One of my favorite historical military figures is Sun Tzu.

Sun Wu (or Sun Tzu, which translates as "Master Sun") was a military theorist who lived circa 400-320 B.C. Born in the Chinese state of Ch'i, he became a general officer in the state of Wu under King Ho Lu. He is best known for his book "The Art of War," a collection of essays with advice on the conduct of warfare. But few appreciate that Sun Tzu was also writing about software project leadership.

Samuel Griffith's translation of "The Art of War" includes a great biography of Sun Tzu. One story is particularly amusing. To paraphrase...

    Sun Wu requested an audience with Ho-lu, King of Wu. Ho-lu read Sun Wu's essays, and asked him for a demonstration using 180 young women from the kingdom. Sun Wu divided them into companies and put the King's favorite concubines in command of each company. Sun Wu ordered them to face right, but the young women just giggled. Sun Wu said, "If orders are unclear, it is the commander's fault." So he explained the orders five times. He gave the order to face left, and the women laughed.

    Sun Wu said, "If orders are unclear, it is the commander's fault. But when the orders are clear, and are not carried out, it is the officers' fault," and he ordered that the favorite concubines be beheaded. The king saw what was happening and sent a messenger to tell Sun Wu to stop, but he replied, "When the commander is at the head of an army, he need not accept orders from the sovereign." The concubines were beheaded, and the next favorite concubines were placed in charge of each company.

    Next, Sun Wu gave the orders to face left, face right, kneel, and march, and all the women followed his orders without a sound. King Ho-lu recognized Sun Wu's abilities as a commander, and made him a general in his army.

"The Art of War" is organized into thirteen chapters, covering topics from estimation to secret agents. The advice is straightforward and practical, and with a little effort, can be adapted and applied to modern life, for example:

  • "Enlightened rulers deliberate upon plans; good generals execute them."
  • "Do not assume the enemy will not come, but rather rely on one's readiness to meet him. Do not presume that he will not attack, but rather make one's self invincible."
  • "There are five circumstances in which victory may be predicted: (1) If you know when you can fight and when you cannot, (2) If you know how to use all of your weapons, (3) If your army is united in purpose, (4) If you are prudent, and (5) If your generals are able and not interfered with by the sovereign."

"The Art of War" can be summarized in four simple directives (which I have liberally paraphrased for effect):

  • Know your enemy.
  • Know yourself.
  • Know what you're doing.
  • Know what's going on.

I don't believe there is a better summary of the art of software engineering project leadership:

  • Know your enemy: Identify the problem and the people who can make or break your project.
  • Know yourself: Understand the capabilities and limitations of your team.
  • Know what you're doing: Have a plan.
  • Know what's going on: Execute to the plan and monitor progress.

In my coming blog entries, I will expand on each of these directives and show how they apply to software engineering leadership.

[There are several translations of "The Art of War." I use Samuel B. Griffith, Oxford University Press, 1963. There is also an online text version here]

Tuesday Mar 20, 2007

How's the book coming?

A friend asked recently what was up with my book blog -- it's been a few weeks since my last post. Well, work has been busy and finding the spare time to blog is tough. But also I'm on the cusp of a new section in my book. The last few months have been part of the first section, which I originally called the "basics" but in hindsight I think a better term would be "philosophy." The second part of the book will be more practical, and address project definition, planning and execution; the tentative title I have for this section is "The Art of Software Project Engineering," which uses Sun Tzu's "The Art of War" to draw a parallel between military and engineering leadership. For the last few weeks, whatever free time I've had for blogging has been spent trying to organize part 2 of the book.

But since I'm blogging (and I'm still technically in part 1 of my book), I wanted to share another thought on leadership philosophy.

I was in my car listening to "Drops of Jupiter" by Train on the radio, when I heard the line, "Can you imagine... Your best friend always sticking up for you even when I know you're wrong." It reminded me of an incident when I was a young engineer...

We were investigating a problem in one of our products. The problem only exhibited itself during stress testing, and even then, only once every couple of days. Due to the cost of testing, we had a meeting to decide what testing should be done to further isolate the problem. I had a theory about the cause of the problem, and a test scenario I wanted to execute for two days to confirm my theory. A more senior engineer, also named Bob, dismissed my theory out of hand, and wanted to do a different set of tests for about a week. I tried to argue my case, but was consistently shot down. At the end of the meeting, our manager decided to fund my testing first, and if my theory was disproved, we'd follow up with the senior engineer's test proposal. I felt vindicated.

Outside the meeting I approached my manager. "So, you think I could be right?" I asked him. "No," he responded, "I'm almost positive you're wrong and the other Bob is right." I was stunned. Then why fund my testing? "Sometimes," he explained, "my opinion doesn't matter. You feel strongly you're right. And I respect your opinion." He went on to explain that at worst it would cost us two days of testing, and that was worth the cost to explore my theory. And if he only funded testing that matched his own opinions, he would miss contrary opportunities.

As it turned out, it only cost a few hours of testing -- shortly after the test started, the failure recurred and my theory was shot out of the water. And in the end, the other Bob really was right. But it was worth it; it was worth the lesson I learned that day. A good leader will respect the ideas and opinions of his engineers, whether he agrees with them or not.

Tuesday Feb 06, 2007

Beekeeper, Shepherd and Cowboy

In my previous blog entry, I identified three primary roles of a good leader, which I called beekeeper, shepherd, and cowboy. This entry explains the roles in more detail.


Beekeeper: System Oriented

Bees are very self-sufficient creatures. They know what to do, and they are very eager to go do it. The role of the beekeeper in an apiary is to create an environment in which the bees can be productive.

The beekeeper must provide the physical resources that a colony of bees needs to produce honey, including a hive in which to build their honeycombs. And the beekeeper must make sure the bees have access to nectar-producing flowers. The bees are also a vital part of farming, so the beekeeper will work to establish a symbiotic relationship with agriculture, ensuring that the bees have plenty of nectar, and in turn the bees provide a pollinating service to the farmer.

It's not sufficient for a beekeeper to simply own bees. In order for bees to be productive, the beekeeper needs to ensure that the colony consists of the right members -- a queen, drones, and workers (both foragers and ripeners). Having the right combination of bees is essential to a productive colony.

A beekeeper cannot force a bee to make honey; he does not have to. Bees do what bees do naturally, and will remain with the colony and work hard as long as there are appropriate resources for them to do their job. A beekeeper cannot tell a bee how to make honey. If he tried, he'd probably get stung on the nose. (I once got an email from a manager saying, "We have a lot of bugs. Let's assign engineers, get them root-caused, and implement fixes asap." Sometimes I wish I had a stinger!) A beekeeper does not count the number of flowers a bee visits each day; he measures the colony based on its output, the amount of honey being produced. A beekeeper cannot force a bee to stay with the hive. Bees are free to leave whenever they want. The beekeeper trusts the bees to return every evening. And when the hive is overcrowded, the beekeeper tries to prepare another hive so the bees can expand without having to leave the apiary.

Engineering project leaders need to look at the system in total -- the system that our project fits into, and the system of individuals that make up our team and our organization. As a beekeeper, the project leader acts as:

  • Systems Engineer: Looks at the overall design to ensure that the product fits within the system, just as the colony fits into the environment.
  • Team Builder: Works to ensure the project team has the right members, with the appropriate mix of skills in the correct quantities. Works with other teams and organizations to achieve symbiosis.
  • Empowerer: Trusts the engineers on his team to do good work, and empowers them to make decisions.

This aspect of leadership is often overlooked by team members. When a project goes smoothly, no one notices that the leader has been working his tail (stinger?) off making sure that the project is well defined and the team has everything they need to do their jobs the best way they can. The best compliment I ever got was when a team member said of me, "He clears roadblocks so I can just do my job."


Shepherd: Team Oriented

Sheep follow a shepherd because they know and trust him; and the shepherd knows his sheep. A shepherd typically does not own the flock. He's a hired hand charged with their care. He needs to make sure the sheep are safe, healthy and growing. He leads them into areas with grass to eat, safe from wolves and other predators. A shepherd knows each and every one of his sheep, and when one is in trouble, he will leave the flock to go help.

A shepherd moves the flock as a cohesive group. He does not need to force sheep to follow him. He simply leads the way, and they follow because they know that he is always acting in their best interest. Being organized in a flock is also safer for the sheep -- they can look out for each other.

A shepherd will brag about his flock, and praise his sheep. He doesn't walk around saying, "I'm a great shepherd," for if he did, people would tell him to get back to tending the sheep. Instead he promotes his sheep, keeps them healthy, happy and productive, and brags about them, and as people admire the flock, they will also recognize the shepherd that tends them.

Engineering project leaders need to recognize that the people we work with are indeed human beings. It sounds funny to say this, but all too often one can start treating people as tools to get a job done, looking at head count instead of faces. As a shepherd, the project leader acts as:

  • Guide: Moves the team in an organized and calm manner through difficult situations, earning the trust of the team members.
  • Relationship Builder: Builds relationships with team members, and establishes relationships with other teams, management, and customers.
  • Mentor: Always looks out for the well being of the team members. Ensures they have the opportunity to grow, learn, and improve.

I was once at a conference and a presenter was talking about the importance of establishing rapport with others. "Get to know them as people, their likes and dislikes, their hobbies. And when you need help, they'll be more willing to come to your aid," she explained. One member of the audience (whom I'm ashamed to admit I actually knew personally) raised his hand and asked, "Won't they eventually see through this?" The presenter looked confused. "You know," he continued, "all this pretending to be interested in them." That illustrates the difference between most people and good leaders -- good leaders do not pretend to be interested; they develop genuine relationships with others.

Cowboy: Goal Oriented

The term cowboy can have a negative connotation: someone who works outside the rules. But here I'm referring to the real, working cowboy, who moves a herd of cattle across the countryside.

Before starting a cattle drive, the cowboy needs to select a route. He factors in all sorts of environmental variables -- snowfall, river depths, etc. -- and plans where the herd should be on any given day, based on the number of miles they should be able to traverse over a given terrain. The cowboy looks at the big picture, and determines the best way to accomplish the goal.

Out on the drive, the cowboy keeps the cattle moving quickly in the right direction. Speed is important, but so too is keeping the herd together and organized. He prods the slow cattle to keep up with the herd, and makes sure the faster ones don't get too far ahead of their peers. And when cattle wander off in the wrong direction, he's there to lead them back onto the right path.

Always, the cowboy is thinking about the goal, and monitoring the herd's progress toward that goal.

Engineering project leaders need to identify the goal, and drive everyone toward a successful finish. As a cowboy, the project leader acts as:

  • Visionary: Understands the goal, and all the factors that stand between his team and achieving the goal.
  • Motivator: Encourages everyone on the team to keep moving forward.
  • Navigator: Keeps everyone moving in the right direction, never losing sight of the ultimate goal.

Some leaders are good beekeepers and shepherds, but lack the cowboy drive. They'll assemble a great team, and identify the goal, then sit back and hope something good happens. I call them laissez-faire leaders -- they exhibit a hands-off approach to leading their teams. Good leaders will know what every single team member is working on, where they are, and where they need to go next in order for the entire team to achieve its goal.

One Trick Ponies

I'm mixing metaphors here, but I have seen many examples of leaders who exhibit only one of the leadership roles I've outlined above: one-trick ponies. They can be successful, but their lack of well-roundedness will always hold them back.

Consider the leader who is only a good beekeeper. They will understand the product, and perhaps have a great vision of what it should be, but they're unable to drive the organization toward the goal. They may understand how to work the corporate system, but lack the skills to form a close-knit team. They tend to be aloof and introspective. People often say of them, "He's got great ideas, but he never delivers."

The leader who's only a shepherd builds a great team -- his people trust him and love to work for him. But the team itself tends to be unproductive. They don't have a key product they're working on. Or they wander aimlessly around, from technology to technology. Or they develop a great technology but are unable to get all the bugs out and it never ships. I worked at a company that fired a shepherd once; all of the engineers were irate because he was a beloved manager, but in all the years I worked there, he had never delivered a product.

Finally, the cowboy drives people hard and never loses sight of the goal. But in the process, he often overworks his team members; in turn, the team members tend not to trust him because they believe he'd betray them to achieve his own goals. This person also moves forward without regard to how the system works. From a technical perspective, they deliver a feature that doesn't play well with the other product features. From an organizational perspective, they don't follow established processes, and are always seeking waivers, or bad-mouthing the system for holding them back.

Playing one, or even two, of the leadership roles can result in a marginally successful leader. But when a person understands and plays all three roles, they perform at a much higher level.

Copyright 2007, Robert J. Hueston. All rights reserved.

Monday Feb 05, 2007

Herding Cats

There's a saying that managing engineers is like herding cats. Personally, I hate that saying -- it implies that engineers cannot be led, which is absolutely wrong. It is true that engineers typically cannot, and should not, be managed in the same way one manages a chain gang, with orders backed up by the threat of punishment; I'm sure it was people who tried this and failed who coined the phrase about cat herding. Engineers can be led.

To be a good leader of engineers, one has to take on the roles that good engineering leaders play. One has to be willing to set aside one's own behaviors, and adopt the behaviors of a good leader. Like an actor, the engineering leader plays a role, and often the role changes from scene to scene. I'm not suggesting that an engineering leader learn to act, to pretend, to be a leader; instead, they must learn how to act like a leader.

Three Roles of an Engineering Leader

In his book From Sage to Artisan: The Nine Roles of the Value-Driven Leader, Stuart Wells identifies nine roles for a leader, but I can never remember all nine. Instead I've identified three primary roles that the engineering leader assumes, and yes, I've come up with cute names so they'll be easy to remember:

  • beekeeper: system-oriented.
  • shepherd: team-oriented.
  • cowboy: goal-oriented.

More detail on each role will be provided in a later blog entry.

Why three roles, and not one? There is nothing quite like leading a team of engineers, and as a result, there is no single analog that reflects the engineering leader. Also, a leader's behavior has to adapt, based on the people and situation. At any given moment, the leader will be exhibiting one or more of the roles.

Studying Leadership Roles

Why should we study the behaviors of a leader -- aren't you either a good leader or not? No. Some people may take on the roles of a good leader instinctively, but the rest of us can learn the roles as well.

When I graduated from college I was a shy, introverted young engineer with little or no leadership abilities; only a desire to be an engineering leader. Over time I learned to act like a leader by watching the good engineering leaders around me. I learned how to give good presentations despite my abject fear of public speaking (and today some people even think I love an audience). I learned to approach strangers and get them to help when my nature is to avoid unknown situations. I learned how to motivate others, and how to motivate myself. I learned how to touch the human side of my teammates, and how to allow them to touch me. I learned that even though my gut might be telling me to keep my head down low in the foxhole, someone must stand up and lead the charge, and that someone was me.

I think any person, even with no natural leadership abilities, can grow into a good engineering leader. And a person like me who has gone through the transformation is probably in a good position to talk about the roles and behaviors a good leader must learn.

In addition to the three roles for a good leader, there are also dozens (perhaps hundreds) of other roles that we all play in our normal lives. As a leader, we can't always play the role we want to play; we have to decide to act differently in order to ensure our people and projects are successful. When another project leader calls up and says they can't deliver some dependency on their promised date, we might want to employ the angry, indignant customer role who yells some derogatory epithet and slams down the phone, but we need to play the beekeeper role. When we have a passive-aggressive engineer who won't follow the plan, we could play the wounded child role and go back to our office and cry, or the cowboy role and push them to complete their tasks. If we have a set of roles that we can use in these tough situations, we can stop, think about the correct role, and then move forward to resolve the situation in a constructive manner.

Roles in Action

As a negative example, consider the person -- we all know one -- who consistently plays the same, predictable role. My favorite I call the "grenadier"; he's the guy who figuratively throws a grenade into a crowded room (for example, he emphatically says, "This will never work!"), then when the smoke clears, he walks in to see who's left standing (he sees if anyone is able to defend their original position). It's a role that has limited usefulness, but the real problem is when a person plays the same role in all situations. I can imagine when this person gets home from work: He yells in through the front door, "Dinner smells terrible!", then walks in, kisses his mother on the cheek and asks her what she's been cooking.

As a more positive example, consider a project I led a few years ago. Our software needed to interface with the software from another department, and my senior engineer had a proposed interface, call it option A. She met with the senior engineer from the other group, and he was very negative on her proposal -- it was overly complex, it would never work, and "everyone else" was using another approach, call it option B. My engineer didn't think option B was that great, and there appeared to be technical holes in the approach, but she went back to the drawing board and in a week or two we modified our design based on option B and got his personal OK (in writing via email). Then we held a formal review with multiple groups. When we reviewed this portion of the design, many people criticized the interface, pointing out the deficiencies in option B, and recommended a different approach, almost identical to our original option A. The engineer from the other group joined the choir, saying, "I told them this wasn't a good idea, and I think they should have used option A." My senior engineer seethed, but I encouraged her to refrain from commenting. We took an action item to investigate the two approaches.

Back at my office, my senior engineer let loose. She still had the email from him telling us to abandon option A and use option B. She wanted to send the email to his boss in order to point out his incompetence. I did too, but I realized that running with my emotions would have only served to initiate a finger-pointing argument, which could sour the relationship with the other group and hinder forward progress. More importantly, we needed the OK from this other engineer, and making him look bad, especially in front of his manager, was not going to help achieve our goal. Instead, I took a deep breath, and playing the role of the beekeeper, I drafted an email response to our action item saying that upon further review, we agreed that option A was superior, and thanked the members of the design review for catching this problem, all of which was true -- we did believe option A was superior and we were glad the reviewers helped finalize the design. I just chose not to say anything further. This approach got us what we wanted: Approval to use option A (which is what we wanted all along), an official OK from the other group (which we needed to finish our product), and left us on good terms with the other engineer, with whom we often would have to deal in the future.

The story does have a happy ending, well, as happy an ending as a story about design reviews and software interfaces can be. The other engineer sent a private email to my senior engineer saying that he realized that he might have been the one to lead us down the wrong path, and he apologized for any inconvenience that might have caused. He was basically admitting, privately, that he had screwed up, something he could not do publicly for whatever reason.


Roles are a set of behaviors that we use when dealing with others. There are roles we play naturally, from instinct. We can also learn roles, roles that are constructive, supportive, and motivating in order to lead our projects and achieve our goals. When we understand the roles available to us, we are able to intelligently select a role that best suits the situation.

Copyright 2007, Robert J. Hueston. All rights reserved.
