In my posting what you're doing (Part III), I introduced the concept of engineering a plan.
One of the key steps in engineering, whether of plans or of products, is analysis.
This is the second in the series of analytical techniques for plans: analyzing deliverable definitions.
Analyzing a product (a circuit board, or a software library for example) usually involves
identifying the inputs, and how those map to outputs. Analyzing a plan involves the same thing.
In a product design, for example designing a circuit board, you might start with the inputs and outputs
of the board. As you decompose the design, you specify the inputs and outputs of functional blocks, the
inputs and outputs of chips, and perhaps, if your design includes FPGAs or ASICs, the inputs and outputs
of functional blocks within the chip.
Inputs and outputs must be well defined. It would not be acceptable to say that a signal should
try its best to go high when an input goes low, or that a signal should change state "at some point"
in the future without a tight time limit. It's also fruitless to define output signals that
are not needed or used by any other circuit (we might realize that the person designing the chip
will probably need such a signal internally, but we don't spend time trying to guess how a chip might
work internally). [For software engineers, consider the old K&R method of declaring function names
without function prototypes. You had to guess what the inputs and outputs of each function
were; the compiler didn't enforce them, and if you got them wrong, you wouldn't know until the code
didn't work right. If you were lucky, the program would seg fault; if you weren't lucky, it would
misbehave in subtle ways. Tightly specifying function inputs and outputs, including the number of
arguments, their types, and whether they were const or variable, was a key addition to the C language
that made it possible to engineer large-scale applications in C.] We all know that if an input or output
is not properly specified, the result will likely be a product that just doesn't work. We all know this,
yet when we engineer a plan, a common problem is vague definition of inputs and outputs.
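To make the K&R aside concrete, here is a minimal sketch (the function name and parameters
are hypothetical) contrasting an old-style declaration with a full ANSI C prototype:

    /* K&R style: the compiler knows nothing about the arguments.
       Calls like send_frame(buf) or send_frame(1, 2, 3) both compile
       without complaint; a mismatch only surfaces as a runtime failure. */
    int send_frame();

    /* ANSI C prototype: the number of arguments, their types, and their
       constness are all enforced at compile time. */
    int send_frame(const unsigned char *buf, unsigned int len);

The prototype is the software equivalent of a well-defined signal: anyone reading it knows
exactly what goes in and what comes out.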
For a project, inputs and outputs are deliverables: what you deliver from your project are your outputs;
what you need delivered to your project are your inputs. Deliverables must be specified with the same
sort of engineering rigor as electrical signals in a circuit design.
When we say that something is well-defined, we usually mean it is specific, measurable, achievable,
relevant, and time-bound: SMART.
Specificity in engineering is commonplace. But there tends to be a lack of specificity when
engineering a plan: when we describe the outputs of a task, or the outputs of the entire project,
we are often vague about what, exactly, will be delivered.
As an example, I've often seen the task deliverable "code complete" in a plan. "Code complete" is not
very specific. To one person, it may mean that they have finished typing the code, but they haven't
compiled it yet. To another person, it may mean that the code is written, compiles cleanly, has been
inspected by peers, and completes a set of unit tests (after all, how do you know it's "complete"
unless you've tested it?). Whose interpretation of "code complete" is correct? They both are, because
"code complete" is not specific and is open to broad interpretation. Whose fault is it that the
deliverable does not meet expectations? The project lead's.
Another slightly humorous example I saw recently was a schedule item called "unit testing complete".
The owner claimed he was done, and indeed he had run all the unit tests on his code, but half of
them failed. The project lead felt "unit testing complete" meant "unit tests run and all tests pass".
Or consider when the project lead thought "requirements complete" meant the requirements document
was reviewed and approved, while the document's owner thought it meant the document was written
and ready to start being reviewed.
As a general rule of thumb, if a deliverable has the word "complete"
in its definition, then it probably isn't defined completely.
One easy way to address specifics is to define a set of standing rules. For example:
- Requirements complete: requirements documented, all issues and TBDs resolved, and the document
reviewed and approved by all applicable parties.
- Design complete: design documented, meets all requirements with no outstanding issues, and has
been reviewed and approved.
- Code complete: code written, compiles cleanly with no errors or warnings, meets code style guidelines,
has successfully completed code inspection, has completed and
passed all unit tests, and has been checked into the source code repository.
- Test complete: all planned tests have been executed, and either all tests have passed, or
bug reports have been submitted for all failures.
These are clearly just examples, meant to highlight how you could define "complete" in specific
terms. Being specific in a plan is as important as being specific in product design.
When analyzing a deliverable to see if the definition is specific, ask yourself: Would everyone
have the exact same understanding of the deliverable? If not, then the definition is not specific.
When we talk about something being "measurable," we're really saying that we can prove empirically
that it's true. In hardware design, a requirement like "the output should toggle really fast" is
not measurable, so we cannot judge if an implementation actually meets the requirement. I could
just imagine the quality of a product if the hardware "should do its best to meet most of the
set-up and hold requirements" or if the software "should be small and execute fast." The product
would likely be unusable; the same is true for a plan that lacks measurable deliverables.
It's sometimes hard to separate the discussion of "specific" and "measurable." If you're not
specific, then rarely are you measurable. And in the previous section, I tried to provide examples
that were both specific and measurable. On the other hand, it is possible to be specific and
still not measurable.
For example, there could be a deliverable such as, "All unit tests execute
and pass." It is specific in that the unit tests must exist and both execute and pass. But
how do you know that the unit tests are sufficient? If I wrote one unit test,
executed it, and it passed, am I done? On the other hand, "All unit tests execute and pass
and provide 99% statement coverage as measured by gcov" would be specific and measurable --
it tells you what the completion criteria are and how to measure them. [gcov
is a tool that measures which statements of code have been executed.] Anyone could inspect
the gcov coverage report to see empirically that the deliverable met all its requirements.
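To sketch what a measurable unit-test deliverable looks like in practice (the file name,
function, and tests below are hypothetical), consider a test that yields a pass/fail result
and, when built with coverage instrumentation, a gcov statement-coverage report:

    /* test_clamp.c -- a unit test whose outcome is empirically checkable.
       Build, run, and measure statement coverage with:
           gcc --coverage -o test_clamp test_clamp.c
           ./test_clamp          (runs the tests)
           gcov test_clamp.c     (prints "Lines executed: ...%")        */
    #include <assert.h>
    #include <stdio.h>

    /* The unit under test: clamp v into the range [lo, hi]. */
    static int clamp(int v, int lo, int hi)
    {
        if (v < lo) return lo;
        if (v > hi) return hi;
        return v;
    }

    int main(void)
    {
        assert(clamp(5, 0, 10) == 5);    /* value in range    */
        assert(clamp(-3, 0, 10) == 0);   /* value below range */
        assert(clamp(42, 0, 10) == 10);  /* value above range */
        printf("all unit tests passed\n");
        return 0;
    }

Anyone can verify the deliverable: the test binary either prints the pass message or aborts,
and the gcov report states the exact percentage of statements exercised.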
When analyzing a deliverable to see if it's measurable, ask yourself: How do I know it is done?
If you're having trouble with the answer, then the definition of the deliverable probably isn't
measurable.
It seems ridiculous to have to point out that deliverables must be achievable. But once I started
paying attention, I found that unachievable requirements for deliverables are far more common than
I had ever realized.
Take, for example, a requirement like, "The code will be complete and bug free before delivery
to QA to start testing." In large, complex systems, it is virtually impossible
to be bug free, let alone bug free before testing. So what's the problem with a requirement like
this? For one, developers will read it, recognize that it's unachievable, and
laugh it off without further consideration. Perhaps the requirement gets changed to, "The code
will be complete and mostly bug free..."; however, that's not measurable. Maybe the real intent
was, "The code will be complete and all bugs found during unit testing will be fixed or waived
by the manager of QA before delivery." This last requirement is achievable, measurable, and specific.
Be careful of requirements that have words like "all", "none", "never" or "always" in them. That
can be a flag that the requirement is not achievable. Note that in the previous paragraph I had
"all bugs... fixed or waived..." You may come across a situation where one bug is not
resolved. If the deliverable is defined as "all bugs fixed" then you'll have issues to deal with
while executing your plan (you'll be running around trying to invent a waiver process);
it's better to establish achievable requirements up front so that
execution can go smoothly.
When analyzing a deliverable to see if it's achievable, ask yourself: Am I 100% certain
this specific and measurable deliverable can be met? If you are unsure, you may be
dealing with an unachievable deliverable.
Obviously, deliverable definitions should be relevant -- don't specify the color of the paper
that a document should be printed on (especially if it's going to be distributed electronically).
But there's another form of relevance that is often forgotten during planning: Don't specify
deliverable outputs that aren't inputs to someone else. Seems obvious, but apparently it isn't
because I'm constantly seeing plans that include deliverables (documents, code deliveries, etc.)
which are not needed by anyone else.
In many cases these are "internal signals," things that probably must be done as part of the
task working toward the deliverable, but they are not deliverables themselves.
I don't know how many times I've been at project reviews, and a project lead reports that a
deliverable, for example the stack usage analysis, is slipping its schedule. The VP will
undoubtedly ask, "Who is affected by this delay?" And the answer is usually, "Well, no one.
It's just used internal to the team." If no one is affected if a deliverable is delayed,
then probably the deliverable is not relevant.
Of course, relevance may have many layers. From a "product" point of view, an internal deliverable
may not be relevant. Within the project team, if one team member does not deliver X (an internal
deliverable) to another team member, then the schedule for product deliverable Y could be at
risk. When working within the team, X is relevant; when discussing the project with external
people, then Y is relevant.
When analyzing a deliverable to see if it's relevant, ask yourself: Who would care if
this deliverable is delayed or canceled? If the answer is "no one," then it's probably
not relevant.
Time-bound means there's a due date, a time when the deliverable must be available. It's pretty
easy to tell if a deliverable has a time boundary; but it may take some analysis to tell if
the time boundary is a good one.
When we put together a schedule we usually have a date when something should be done. But a
plan is not an estimate of when you might be done, it's a promise of when you will be done.
[That sentence has become sort of a mantra with me and my teams. I can now say the first half,
and almost anyone who's worked with me will finish it.] A Gantt chart is not a plan; a schedule
is not a plan. A plan is a promise, a contract.
If your best engineers got together and thought they would be done with a product by January 1st,
your schedule might say January 1st. But if your boss (manager, marketing, venture capitalist)
asked for your "drop dead" date, the date you promise you will be done by and will be fired if
you miss, you might not pick January 1st. You'd pick a date you were 95% confident your
team would be done by. Maybe February 15th.
A plan should document the dates you promise deliverables. It's fine to say you might
be done January 1st, but you're willing to promise February 15th. You'd continue
to work your team toward an early finish date of January 1st, but when you report on your
deliverables outside the team, you'd report on your confidence of hitting February 15th.
And if you finish January 1st, or January 31st, or February 14th, people may be pleasantly
surprised, and you will have met your promise.
When analyzing a deliverable definition to see if it's time-bound, ask yourself: Do I know
exactly when it's due, and can I promise to meet that date? If you're not sure of the
answer, then the time boundary may only be an estimate, and you need to make it a promise.
When analyzing the deliverables from a project, ask yourself if they are SMART. Ask yourself:
- Specific: Would everyone have the exact same understanding of the deliverable?
- Measurable: How do I know it is done?
- Achievable: Am I 100% certain this specific and measurable deliverable can be met?
- Relevant: Who would care if this deliverable is delayed or canceled?
- Time-bound: Do I know exactly when it's due, and can I promise to meet that date?
Some of this may seem pedantic. But the result of your analysis will be better-defined deliverables, and fewer surprises in the long run.
Copyright 2007, Robert J. Hueston. All rights reserved.