Tuesday Feb 12, 2008

Why do they call it turkey?

Last Thanksgiving I was preparing dinner with my four-year-old daughter looking on. She had that look on her face, the one she gets when she's forming a difficult question. Finally she asked, "Why do they call it turkey?"

"Because it's turkey," I answered, matter-of-factly.

"No," she continued, "I mean, why do they call the food 'turkey' the same as the bird 'turkey'?"

It was then I realized that she had not yet made the connection that the animals on the farm and the food on our plates were one and the same. So I started to explain. At one point I flipped the turkey onto its legs and had it walk across the counter, doing a can-can and flapping its wings. Perhaps not my best moment in parenting, but it got the message across. She nodded her head in patronizing agreement, and wandered away.

I was worried how she would handle dinner with her new-found knowledge. Would she eat the bird? Would she become a devout vegetarian on-the-spot? Would she enter the dining room chanting protest songs and holding a sign that reads, "Let my turkeys go!"?

But things went fine. She had no qualms about eating the turkey on her plate. She finished one slice, and asked for seconds. I was somewhat relieved, until I noticed she hadn't touched her vegetables. "Eat your veggies, too," I reminded her.

She picked up one legume with her spoon and rolled it around on her plate, examining it carefully from every side. Finally she paused and got that inquisitive look on her face. "Daddy," she asked, looking up at me, "why do they call it 'pea'?"

American Community Survey: Avoiding A Scam

Since my January 8th post ranting about the way the US Census Bureau has been handling the (in my opinion, highly intrusive) American Community Survey and failing to safeguard me from fraud and identity theft, the Census Bureau has rolled out a new home page with a link called Are You in a Survey?

This new web page gives the following information and advice:

  • The address that the American Community Survey response should be sent to, so you can verify that the data you provide is going directly to the Census Bureau.
  • If you have received a telephone call from someone at the Census Bureau, and you have any questions, you may speak directly via telephone or e-mail with an employee of the National Processing Center (at 1-866-226-2864).
  • If a person claiming to be from the Census Bureau comes to your door, ask to see their identification badge and a copy of the letter that was sent to you from the Census Bureau. And if you have any questions about their authenticity, call the National Processing Center.

All of this is good advice, and helps to ensure that your personal information is only going to authenticated, and authorized, individuals. Interestingly, these are all things I suggested in my recent posts. You think maybe someone at the US Census Bureau read my blog?

Sunday Feb 03, 2008

American Community Survey: The scam that keeps on scamming

On January 8th I wrote American Community Survey: Big Brother or Scam? I eventually decided to fill out and submit the ACS; now I regret it.

Two weeks after mailing in the survey, we started to get phone calls from a mysterious 800 number. No name on the caller id, just a number.

After, literally, hundreds of these unanswered calls, we slipped. I was traveling and my wife answered the call, despite the anonymous caller id, thinking it might be from a hotel or calling card. But it was someone claiming to be from the Census Bureau wanting to "verify" our answers to the American Community Scam. If these calls were really from the Census Bureau, why would the caller id name be blocked? Obviously, this is a follow-on scam.

My wife was polite, but cautious. She confirmed the names of the people living in the house (information you can get from a multitude of sources). But when the caller started asking specific questions about our finances, she wisely stopped. "How do I know you're really from the Census Bureau?" she asked. After she refused to answer any more questions, the caller told her she would need to call another 800 number to verify his authenticity.

Yeah, right.

If I were to set up a scam, I'd do just this. Specifically, I'd call numbers from the phone book at random and say, "I'm calling about the American Community Survey you recently submitted." If the resident hadn't received the survey, I'd hang up. But if they had received it and returned it, I'd ask them to "verify" their answers, and proceed to ask for their names, social security numbers, and financial information. If they refused to answer and demanded some form of authentication, I'd give them an 800 number they could call. When they did, my partner would answer the phone saying, "US Census Bureau, American Community Survey department. How can I help you?" OK, we'd want this to sound like the government, so maybe he wouldn't be so polite. But anyway, he'd "confirm" that the previous caller was indeed from the government, and that they had to answer every question asked.

How many people would fall for a scam like that? I'd bet 90% of the US population, based on the number of people that forward me junk emails about Microsoft paying out $100 each time that email was forwarded.

Ironically, the US Department of Justice has a web page on Identity Theft and Fraud. On that page, the DOJ gives some good advice about avoiding identity theft at home, including:

  • Start by adopting a "need to know" approach to your personal data. A person who calls you and says he's from your bank doesn't need to know information that's already on file with your bank; the only purpose of such a call is to acquire that information for that person's personal benefit.
  • If someone you don't know calls you on the telephone and asks you for personal data -- such as your Social Security number, credit card number or expiration date, or mother's maiden name -- ask them to send you a written application form.
  • If they won't do it, hang up.
  • If they will, review the application carefully when you receive it and make sure it's going to an institution that's well-known and reputable.
Based on the guidance given us by the Department of Justice, it's clear that we should all discard the American Community Survey if we receive it in the mail, and hang up when they call.

Wednesday Jan 09, 2008

COMDEX 1996

A friend of mine makes an annual pilgrimage to CES in Las Vegas. The last trade show I went to in Las Vegas was Comdex. OK, it was a while ago, 11+ years I think, but still I went.

Bill Gates was the keynote speaker that year (as usual). He talked about a brave new world where your PC would be your personal slave, and something called Bob(tm) would do your every bidding. The example was something like... you ask Bob to get you tickets to the opera, and while you're off living your life, Bob finds the best-priced tickets, in your preferred section of the Opera Hall, and while he's there, he also books dinner reservations at your favorite restaurant in the Theater District. I recall walking out of the auditorium and wondering if I, a Bob myself, had to worry about paying Microsoft royalties whenever I signed my name. Thank goodness Bob never took off; as it is, I hate that stupid little paperclip person who tells me I'm using Microsoft Word all wrong -- what an annoying little pest he is!

One of the hot technologies that year was the DVD. It promised to provide 2.5 hours of video, in multiple languages, all in the form factor of a CD. Of course, back then, DVD players cost hundreds, if not thousands of dollars. But after watching a demo of a DVD on a big-screen TV, I was sold. I ran right out and got a DVD player within five years, when they cost $200. Last year I bought a DVD player for my mother-in-law for 35 bucks. Of course, now the video technology is Blu-Ray vs HD-DVD, and the future isn't as clear.

Another new technology being promoted that year was digital cameras. Why in the world would anyone want a digital camera, I wondered. They were huge -- bigger than my old Fujica ST605N SLR (which I still have and use), low resolution (2 megapixels was the norm, not enough for a crisp 8x10 color glossy with circles and arrows and a paragraph on the back of each one), and expensive, easily costing over $1000 for most models. I think it was another 6 or 7 years before I went digital myself.

I went to the show representing PictureTel, at the time a major high-end videoconferencing vendor. We were telling everyone how video would change the world, business travel would be a thing of the past, and life would be better. Life is better, for me at least -- I quit PictureTel before they went bankrupt and their hollow husk was bought by Polycom. Of course, maybe the people at PictureTel were right: video is changing the world. It's just being done using commodity webcams on laptops and PCs. Seeing the future and capitalizing on your vision are clearly two different things.

Tuesday Jan 08, 2008

American Community Survey: Big Brother or Scam?

A couple of months ago, we received a mailing, purportedly from the US Census Bureau, called "The American Community Survey." It's more than a census; it asks detailed personal and financial questions which, quite frankly, include things I wouldn't tell my own mother, let alone the US Census Bureau.

Besides asking for the address, the names of everyone who lives here, and their birthdays (ideal information for identity theft), it asks questions like:

  • Race of each person in the home. (I didn't even think that was legal to ask!).
  • How many bedrooms are in this house. (What, are they planning on moving in?)
  • Does the house have running water? Hot water? A flush toilet? (Obviously they plan on staying for a while!)
  • How many vehicles are kept at the home? (They must be bringing their own car.)
  • Last month, what was the cost of electricity for this home? (I hope they plan on splitting the cost of utilities while they're staying with us.)
  • Is this a house, apartment or mobile home? (Beggars can't be choosers, I say!)
  • Does the monthly rent include meals? (It's a house, not a B&B!)
  • What were your wages, salary, commissions, bonuses or tips from all jobs, interest, dividends and rental income, accurate to the nearest dollar? (Do they promise not to compare notes with the IRS?)

The instructions state "The law requires that you provide the information asked in this survey to the best of your knowledge." (emphasis not added by me). On the other hand, I got an email recently that required me to provide my name, credit card number and mother's maiden name to some eBay-look-alike web site; I didn't fall for that one either. So I read the survey carefully, then tossed it in the recycling bin.

Then I started to get the phone calls.

Of course, I get phone calls all the time, from people claiming to be with the government, with the UK National Lottery Commission, a Swiss probate lawyer for my late, apparently estranged great uncle Harold Steinman who recently died and named me as his sole heir, and even representatives from God himself (why they need to use a phone, I'll never understand). This is what caller ID is for.

After two more postcards, and another copy of the survey, I started thinking, hey, even if this does look phony and smell of a scam, maybe this really is legit. So I went to the US Census Bureau web site to see if there was anything about an "American Community Survey" for 2007. Nope, nothing. There was a survey in 2006, but no mention of a survey in 2007. No way to confirm that this survey is legit.

I checked the address on the postage-paid envelope:

    DIRECTOR
    U.S. CENSUS BUREAU
    PO BOX 5240
    JEFFERSONVILLE, IN 47199-5240
Clearly, these spoofers don't know that the US Census Bureau is in Washington, DC! Plus the all-caps style is a dead giveaway of spammers. I also checked the Census Bureau web site, and they don't even list an office in Indiana; the midwest regional office is:
    U.S. Census Bureau
    Chicago Regional Office
    1111 W. 22nd Street, Suite 400
    Oak Brook, IL. 60523-1918
Even if the survey is real, maybe some scammer repackaged it with their own self-addressed envelope? Maybe all of the questions are real, but I'm sending the information to some thief in Indiana.

The instructions include an 800 phone number. But I learned long ago that if you call an 800 number, your phone number is transmitted to the callee, even if you have caller id blocking set up. Telemarketers use this to capture your phone number, and map street addresses to phone numbers. (I know; I had a friend, a software engineer, who worked for a company that did just that. Her specific software project was designed to call people at all times of the day, just to find out when you answer your phone. That way, they could sell your phone number and the times you're most likely to answer to other telemarketers. She eventually quit her job out of guilt.) And if I did call the 800 number and the guy on the other end said, "Ah, yeah, sure, this is the government. Please send us all your info stuff," should I really believe him?

At this point, I'm starting to think that maybe, just maybe, this survey thing is legit, but the government is entirely inept and clueless about authentication and identity theft. If they really want me to fill out this survey, or the 2010 census in two years, they really should:

  • Provide a way I can authenticate that the survey came from the US government. Giving me a phone number is useless; anyone can get a phone number these days. Instead, the instructions should provide something more authentic, like the URL of a web page under census.gov that confirms the survey is authentic.
  • Provide a way I can ensure that my data is really going to the right authorities, for example, by listing on the web site the address that should be on the return envelope.
  • Encourage -- no, mandate -- that everyone visit the web site and verify the address on the envelope before mailing their response! Anything less is just encouraging people to believe whatever they get in the mail with an official-looking seal; it's tantamount to abetting identity theft.
  • Allow people to fill out the survey on the web. Personally, I trust SSL encryption far more than I trust my local mail carrier. On the other hand, I don't really trust the government to secure their servers, so maybe that's a bad idea, too.

Finally, I thought to google '"po box 5240" jeffersonville', and got a hit. Looks like this is a real survey from the Census Bureau, albeit conducted in one of the most shady, disreputable, and hard-to-authenticate manners possible.

In an age where identity theft is a serious business, the US Census Bureau should be keenly aware that the information they process is highly confidential, and a ripe target for thieves to exploit. Clearly, based on my personal experience, they haven't gotten that message yet.

Wednesday Nov 14, 2007

Panambic

I'm trying to invent a new word: panambic [puh-nam-bik].

Its origin is the phrase "PAy No Attention to the Man BehInd that Curtain", a quote from The Wizard of Oz (1939). When Dorothy and her chums return to see the Wizard, they are faced with the image of a giant head surrounded by flames. But Toto the dog notices a curtain and pulls it open, revealing the man behind the Wizard. "Pay no attention to the man behind that curtain," says the Wizard, trying to draw attention away from his exposed true self. You also may have seen it spelled PNAMBC.

Panambic is such a useful term, and can be applied in many ways to emphasize that the underlying mechanism is irrelevant to the outward behavior; in other words, what matters is what you see, not how it works. I prefer to use it as an adjective, as in, "The underlying mechanism is panambic."

I first heard the term "panambic" back in 1994. I was working for a large videoconferencing company at the time. They made systems the size of large microwave ovens that cost $20K each, but realized that the market was moving toward lower-cost set-top boxes. They knew they could reduce their system to fit in a small form factor, but wanted to start getting a feel for customer interest. So a small team set off to build a mock-up: It was a standard cart with doors on the bottom and a TV on top. Atop the TV was a small box the size of a VCR with a movable camera on top. One of the developers plugged the cart into an AC outlet and an ISDN jack, and the videoconferencing system came alive. They placed calls and demonstrated the high-quality audio and video. Everyone was amazed. I asked the designer how they built a functioning prototype so quickly, and his answer was simply, "Panambic!" Then he opened the doors of the cart to reveal one of our large videoconferencing systems concealed in the base. The set-top box was nothing more than a hollow plastic mock-up. The camera was real, but the wires all led down the back to the expensive videoconferencing system. Of course, panambicism can backfire; once executives saw the working mock-up, they expected a real, shipping product in short order!

(Years later I saw a similar situation depicted in a Dilbert comic strip. I wondered if this sort of thing happened often. Or did one of my videoconferencing colleagues contact Scott Adams?)

I once had a field service engineer file a bug (a bug!) that complained that the software accomplished something he thought was impossible and couldn't understand how the software did it correctly. I simply closed the bug with an evaluation saying, "It's panambic." I suspect that poor field service engineer is still wondering what that meant.

Thursday Nov 08, 2007

People are Like Op Amps

A reader of my blog had a question about punishment and reward and how it relates to professionals. The quote he referenced from Sun Tzu's The Art of War was, "Punishment and reward should be handed out without delay." I agree wholeheartedly with Sun Tzu, but my response used an op amp analogy...

Op Amps

Electrical Engineers learn early in college about op amps: The output Vout is equal to the difference of the two inputs (V+ and V-) times a gain G (in an ideal op amp, the gain is infinite). The behavior of an op amp is sometimes written in the form of an equation:
Vout = G (V+ - V-)

Almost all practical op amp designs work on a feedback loop. The most basic op amp design simply takes the output Vout and feeds it back to the negative input V-: a unity gain amplifier with negative feedback. If the output of the op amp is higher than V+, the negative feedback means V- will be higher than V+, and the op amp output will be driven to a lower voltage. If the output is too low, the negative feedback will cause the output voltage to increase. Very quickly, the op amp will drive its output so that Vout exactly equals V+, and tracks V+ as it changes over time. A very nice, simple, and stable design.

Without negative feedback, if the op amp output is drifting high, it will continue to drift higher and higher. Eventually, the output will bear no relationship to the input, and will probably saturate at one rail or the other.

Care must be taken to avoid delay in the feedback loop. Take for example the case where the output is high, so the negative feedback drives the output lower, but the negative feedback doesn't arrive until later. When the negative feedback arrives, the world has changed and the output is now too low. The delayed negative feedback will drive it further in the wrong direction. If this continues, the op amp output may oscillate out of control, swinging from one rail to the other. I think 90% of my Introduction to Circuits course centered around designing stable feedback loops.
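
To make the delay effect concrete, here is a minimal sketch in C (my own toy model, not anything from that circuits course): a crude discrete-time simulation of the unity-gain buffer above, where each step the output moves by gain * (V+ - V-), and V- is the output fed back after some number of steps. With no delay the output settles at V+; with a few steps of delay, the same loop overshoots and swings back and forth.

    /* Toy discrete-time model of a unity-gain op amp buffer.
     * V- is the output fed back after `delay` steps; with delay = 0 the
     * output converges to V+, with delay = 3 it overshoots and oscillates.
     * (A real op amp would clip the swings at its supply rails.) */
    #include <stdio.h>

    #define STEPS 20

    static void simulate(int delay)
    {
        double vplus = 1.0;           /* target input V+              */
        double vout  = 0.0;           /* op amp output                */
        double gain  = 0.8;           /* loop gain applied each step  */
        double hist[STEPS] = { 0 };   /* past outputs, for the delay  */

        for (int t = 0; t < STEPS; t++) {
            /* V- is the output from (delay + 1) steps ago */
            double vminus = (t > delay) ? hist[t - 1 - delay] : 0.0;
            vout += gain * (vplus - vminus);   /* negative feedback */
            hist[t] = vout;
            printf("t=%2d  Vout=% .3f\n", t, vout);
        }
    }

    int main(void)
    {
        printf("No feedback delay (settles at V+ = 1.0):\n");
        simulate(0);
        printf("\nFeedback delayed by 3 steps (oscillates):\n");
        simulate(3);
        return 0;
    }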

Another thing I learned about as a young electrical engineer was noise. Noise in the feedback loop can be fatal. I once tried to design an adjustable gain amplifier using a potentiometer mounted on the front panel to set the resistor divider in the feedback loop. The potentiometer itself is very noisy as you turn the dial and the wiper contact skips along the surface of the resistor. And the wires from the circuit board to the panel are like an antenna picking up every AM radio station in the area. This noise in the feedback loop really screws up the circuit because the op amp misinterprets the noisy feedback, and changes its output, amplifying the noise instead of the signal.

People

People are a lot like op amps.

A person who is allowed to work without any negative feedback will probably go off and do whatever they want, without regard to the goals of the organization.

A person who receives delayed negative feedback may end up getting confused. Why are they getting negative feedback now when they've been doing the same thing for the last 12 months? Or the negative feedback may arrive when they're doing the right thing, causing them to oscillate in their behavior.

And noisy feedback is the worst. I've encountered a number of engineering managers who don't know how to give clear, noiseless, feedback. I've seen managers deal with "problem" employees by giving them satisfactory ratings, but then assigning them boring tasks. Or allowing them to finish a task, and then assigning the same task to someone else to re-do. Nothing screws up a person, or an op amp, more than noisy, confusing feedback.

By negative feedback I mean, of course, any constructive feedback on a person's performance. Good feedback can come from anyone, in almost any form, as long as it's clear and immediate. As an example, peer code reviews are a great form of negative feedback -- a software developer quickly and clearly gets feedback on the mistakes they have made in their code; hopefully they learn from the mistakes and adjust their behavior accordingly.

An Example

The most clear form of negative feedback in my own career happened when I was about two years out of college. I was working for an aerospace company on a proposal for a major customer. My engineering team and I came up with a good proposed design that met all of the customer's requirements, and we prepared a presentation to give to representatives from the rest of the company -- engineers from manufacturing, sales, quality assurance, etc. I got up to present our design to a room filled with engineers twice my age, full of pride and self confidence. After I presented the design, I was peppered with questions from one particular quality assurance engineer: Why did you pick that processor chip? Did you consider using epoxy instead of paint for the chassis finish? What was the cost trade-off using an extrusion versus a machined chassis? Project X used those connectors and had problems; did you investigate the root cause of their problems to ensure it won't be a factor on this project? And on and on he went. The only answers I had were "ers" and "uhs". I was utterly and completely humiliated. I wanted to just crawl in a hole and die.

After that presentation I realized that in the real world, it wasn't good enough to have a solution; you had to be able to show that it was the best solution. You had to show that you gave every possible alternative its due diligence to ensure that you didn't miss an opportunity. And you didn't just have to be right, you had to prove you were right. I learned more about proposals and presentations in 15 minutes from that annoying reviewer than I had learned in my career to that point (and probably since, too). It was negative feedback that changed my behavior in a very positive way, and for the rest of my life.

A few years later I was working on another proposal, and it just so happened that the same quality assurance engineer was a reviewer. I don't think he even remembered me from the last time, but I certainly remembered him. This time I was prepared: I showed that my team had done a thorough job, and convinced the reviewers that we had come up with the best possible solution. There were few questions (none hard), and after my presentation the quality assurance engineer said in passing, "Good presentation." His simple comment meant more to me than anything.

Summary

People can learn from mistakes, but only if those mistakes are painful; only if there's negative feedback. If there's no pain, the lesson goes unlearned. Consider the dog that pees on the carpet, and as a result gets a treat. They're not going to learn it's a mistake. But the dog that gets swatted on the nose will remember the pain, and avoid making the same mistake.

During war, mistakes mean death. So in order to train recruits not to make mistakes, the military uses a slightly less drastic form of negative feedback by inducing pain -- more push-ups, running longer with rifles over your head, or sitting in the brig. Punishment and reward meant far more to Sun Tzu than to the ordinary engineer. But it's still important.

Of course, with engineers you don't need to induce pain. Peer review, gentle criticism, performance reviews and even public humiliation can be quite effective. And of course the feedback must be immediate, and free of noise.

Monday Nov 05, 2007

Plan Analysis Revisited: The Power of Laziness

On my blog Plan Analysis: Smart Deliverables, someone posted the comment:
    Bob, great article, and I agree with most of it, but I have one comment. It seems to me that working your team to one date while promising externally another date could cause your team to be somewhat lax about the early date since they have some built in slip time. So how do you motivate your team in this sort of situation? In my experience human nature is to procrastinate right up until the drop dead date.
I was going to reply in a comment, but found my reply was big enough for a post.

The author of the comment notes that "human nature is to procrastinate right up until the drop dead date". This underscores a fundamental principle which I believe drives all human (and more specifically, engineering) action: people are lazy. To take it one step further, in another blog I pointed out that every engineer I've ever met falls into one of two categories: smart and lazy, or just plain lazy.

The key to successful leadership is harnessing that laziness, and using it for good. Or at the very least, using it for your own profit. But how?

Well, consider that every day we engineers get up, some of us shower, and we all go off to work. Work seems to be the antithesis of laziness, but it really isn't. We know that if we didn't go to work, life would be harder. Harder? What could be harder than sitting in an air-conditioned office, in a comfy swivel chair, surfing the web and drinking Diet Coke all day? Lots of things. If we didn't go to work as engineers, we'd have to get a real job, where they make you work and sweat and get calluses. Laziness motivates us to work.

And laziness drives us to exert a minimum of effort, which, when you think about it, is good for business. The less effort we spend at each task, the more efficient we are.

Back to the original question... How do you motivate a team to deliver earlier than the absolute latest minute? Appeal to their laziness; show them that if they fail to deliver on time, it will mean more (and harder) work for them in the long run. Let's take an example...

Some years back I was leading a project that included a significant change to a Solaris device driver to support new hardware. The plan was to include the driver rewrite in Solaris 9, since the new hardware would be ready to ship about three months after Solaris 9 first shipped. The schedule was tight -- Solaris 9 would lock down about three months before it shipped, so we had to have the driver done six months before the hardware was product-ready. The responsible engineer was in a panic, and couldn't understand why I wasn't. I explained to him...

If we failed to make the Solaris 9 cutoff, we could include the driver in the first quarterly update of Solaris 9, which would ship at about the same time the hardware was ready. New features are allowed in the early part of an update release, so moving to an update release meant we'd have an extra two to three months for development and testing. But with that option comes a lot more work: in order to ship in an update release you have to do more testing on your own, fill out more forms, write more reports, do presentations to committees, and request approvals from review boards. Releasing the software in an update release would give him more time, but would take a lot more work.

And if for some reason we couldn't make the quarterly update, we could always release an unbundled patch. But releasing patches outside of the quarterly release process means even more work: more committees and review boards, and you become responsible for doing regression testing on every platform that could possibly be affected by the patch. It's a huge amount of work, but it would buy us an extra month or two.

So, I wasn't worried at all about shipping the software in time to support the hardware; the only question was how much work we'd have to do. The longer we took meant more and more work. The easiest thing to do would be to get the software done early, in time for Solaris 9. The laziest thing to do would be to work fast and hard.

Obviously, every person is motivated slightly differently, and every person responds to pressure differently. One key skill in a project leader is knowing the people, and understanding how they tick. From there you can find the way to motivate them. I think it was Don Rumsfeld who said, "Leadership is by consent, not command," and Eisenhower said, "Leadership is the art of getting someone else to do something you want done because he wants to do it." In the end, you have to find the best way to motivate each person, that is, the best way to appeal to their sense of laziness. Show them that working hard and meeting your early deadlines is the laziest thing they can possibly do.

Wednesday Jul 25, 2007

Where have I been?

It's been about 3 months since I put some serious work into this book. I haven't been slacking; I've actually been spending my free time writing in my other blog, The Secrets of Olympus.

For the last three years, I've worked on the SPARC Enterprise M-class server line, a joint development project between Sun and Fujitsu. It was a lot of work, and a lot of new and interesting challenges. So I wanted to document some of the neat little things that are special about the SPARC Enterprise products.

Now that SPARC Enterprise is shipping and new development is ramping down, I promise to get back to the book. I've already started the last chapter ("Know What's Going On") of the second section ("The Art of Software Project Leadership"). After that, one more section to go!

Wednesday Jul 11, 2007

Tape Sculptures

A friend forwarded me an email with pictures of tape people -- sculptures of people made from ordinary packing tape. Here are a couple of my favorites:

After some googling, I found the web site tapesculpture.org dedicated to tape sculpture, with some more interesting photos, and a how-to guide to making your own sculptures.

Thursday May 24, 2007

Table of Contents

Those who read this blog know that I'm working on a book, what I've called a field handbook for project leaders. For the last six months I've been writing the book one blog posting at a time. I'm about halfway done, with over 70 pages and more than 30,000 words written so far. At this rate, I should be done by the end of the year.

I was telling someone that my blog was a book. But a book should be read from beginning to end, and blogs tend to be organized in reverse chronological order (and finding the first post can take more than a few clicks). So I thought it might be useful to publish a table of contents, with links to the posts. If nothing else, it's much easier to scan. If this works out, I'll see if I can embed the table of contents right into the masthead.

OK, here's the title page and table of contents of the book...



Tactical Leadership

The philosophy, art and science of software project leadership.

Robert J. Hueston


Table of Contents


Copyright 2007, Robert J. Hueston. All rights reserved.

Friday Apr 27, 2007

Plan Analysis: Ideas, Inspection and Intuition

In my posting Know what you're doing (Part III) I introduced the concept of engineering a plan. One of the key steps in engineering, plans or products, is analysis.

This is the fourth in the series of analytical techniques for plans, and includes three fairly simple items to finish out the list: Ideas, Inspection and Intuition.

Ideas

As a young engineer, fresh out of college, I recall getting a small project from my boss. I worked on it for a while and then got stuck. I struggled with a specific problem for a day or two, at which point I felt defeated. In disgust, I went back to my boss to tell him I couldn't handle the task. He was upset, to say the least. "Why didn't you ask me earlier?" he asked. He already knew the answer. Engineering is not a college test or term project. He told me that the best engineers are often lazy, and lazy engineers borrow (Copy? Steal?) ideas from others. It's more efficient to find a solution that already exists and works well, instead of inventing a new solution for every problem. In school, you're rewarded for doing your own work and not copying from others; in engineering, you're rewarded for copying as much as possible.

When working on a plan, getting ideas and information from others, especially more experienced people, is important. There are many ways to get ideas -- reading others' work, asking, brainstorming. Just don't feel like you have to solve planning issues in a vacuum.

Inspection

Everyone makes mistakes. We're human. We forget things. We're not always thorough. We don't always think everything through. Every process must assume human error, and work to ensure that the cost of human error is minimized.

When engineering a product, it is common to invite peer engineers to inspect the design. Another engineer may look at a design and identify logic flaws, raise questions about the design's tolerance to errors, or just ask questions that cause the author to re-think his own work. When engineering a plan, we should have a process to invite peer project leads to inspect the plan. Like a code walk-through or a schematic page-turner, the plan author and inspectors should walk through the deliverables, tasks and measurements that make up the plan.

In my blog postings, you may have noticed I avoid the word "reviews" and instead use "inspections." The reason is partially semantic: Dictionary.com, for example, defines "inspect" as, "to look carefully at"; while "review" can mean, "a general survey of something." When you hold a "review," participants may feel invited to casually scan the material; when you hold an "inspection," participants tend to feel more of an onus to pay close attention and review the material in fine detail. [In a later posting I'll blog about an effective inspection process.]

Beware that your boss (manager, marketing, venture capitalist) is not a peer. Your boss's goal may be to encourage you to reduce cost and reduce time to market, while also increasing product features and holding firm on quality. That's their role -- to try to get more for less. The project lead's role, on the other hand, is to engineer a realistic and achievable plan. On several occasions I've had project leads come to me when their manager has told them that a plan's schedule was too long, and they needed to pull it in. They naively believed their manager was trying to help come up with a better plan. They weren't; the manager was trying to get the plan to align with some other strategic milestone. If the project lead has done a good engineering job on the plan -- they've done their analysis, used independent methods like CoCoMo to confirm their measurements, and have had peers inspect their plan -- then the only response to their manager should be: If you want to reduce the schedule, we either need to add people or drop features.

Intuition

My definition of intuition is: A subconscious analytical process based on historical data and personal experiences. In other words, when that little voice in the back of your head tells you something is wrong, it's probably because you have subconsciously analyzed the situation and found a problem. The trick is getting your subconscious to cough up the details.

The subconscious is a powerful analytical engine. Years ago I worked at a videoconferencing company and learned about lip-sync -- the degree to which the audio (a person's voice) is synchronized with the video (a person's lips). If the audio and video are out of sync, even by small amounts, most people can not identify the problem, but their subconscious reacts; they feel discomfort, stress or nausea (in my case, it was the last). For an example analysis, see Effects of Audio-Video Asynchrony on Viewer's Memory, Evaluation of Content and Detection Ability. Interestingly, people can tolerate when the audio lags the video by up to 45 milliseconds, but they are bothered when the audio leads the video by as little as 10 milliseconds. This is probably because the human brain has learned that sound travels slower than light, so it is normal to see motion first, then hear the associated audio. But it is completely unnatural to hear the audio first, and this causes the brain to rebel. Subconsciously, the brain processes the auditory and visual stimuli, determines what is appropriate and not appropriate, and notifies the rest of the body that something is very, very wrong.

Similarly, a person with significant experience may look at a project plan and feel uncomfortable, stressed or even nauseous, but not be able to identify the problem. When you ask them what's wrong, they might say, "I don't know. It just doesn't feel right." It's tempting to ignore their comments, but I've learned that what they're really saying is, "My subconscious is using my many years of experience to analyze your project plan, and it's finding major issues, but I don't know yet how to verbalize the results of that analysis."

How can you help the person identify the real issue that their subconscious is flagging? Using the audio/video lip-sync analogy, you might cover up all but one part of the screen, and ask if that part of the image is bothersome. If not, repeat the process using another portion of the screen. Eventually, when you uncover the lips and the audio is not perfectly synchronized, the person will immediately feel discomfort. And with their attention focused on the lips, they will consciously recognize the lip-sync problem. The same can be done with a plan. Draw the person's attention to the high-level list of deliverables. Is this the problem? If not, ask about the detailed tasks and deliverables, the staffing, the schedule, the risk remediation and contingency plans, the list of required equipment and space needs. Hopefully, once the person's attention is drawn to a specific area, they will be able to verbalize their specific concerns.

Trust your own intuition, especially if it has a good track record. And trust the intuition of those who have been successful in the past.


Other Plan Analysis Techniques:
Copyright 2007, Robert J. Hueston. All rights reserved.

Wednesday Apr 25, 2007

Plan Analysis: CoCoMo

In my posting Know what you're doing (Part III) I introduced the concept of engineering a plan. One of the key steps in engineering, plans or products, is analysis.

This is the third in the series of analytical techniques for plans: CoCoMo.

CoCoMo

CoCoMo is the Constructive Cost Model, which is an empirical model for software development projects. The model was created by examining many projects, from small to large, simple to complex, using various programming languages. You give it the number of lines of code, and other information about your product, your team, and your development environment, and it tells you how long projects like this normally take. The model is very accurate; quite frankly, eerily accurate.

I was first introduced to CoCoMo in 1987, when I was working in the aerospace industry. Over the ten years that followed, I used CoCoMo as an integral part of all software planning. Even after I left aerospace for commercial product development, I continued to use CoCoMo and evangelize it to others.

Overview

CoCoMo works by, well, quite frankly, I have no idea how it works. It just does. The CoCoMo model was developed by reviewing many projects, from small to large, embedded to interactive, and an equation was developed that best fit the empirical data. Actually several equations were developed -- a simple (basic) version with just a few variables, to a complex (expert) version with dozens of variables.

When I learned how to use CoCoMo, there were worksheets that you'd fill out, then you'd spend a few minutes crunching the numbers and equations. One of the first things I did as a junior engineer was put the equations into a Lotus 123 spreadsheet. [This was back when most engineers had TI-55 III calculators and some had a VT-220 terminal on their desk. Few even had PCs or knew what Lotus 123 was. I wonder how many young engineers today know what Lotus 123 was.] Today, there are online versions including one from The University of Southern California which greatly simplify the task.

The new tools are very simple to use. You start by entering the number of source lines of code, new, reused or modified. Then you answer several questions, to define "attributes". The lines of code plus the attributes constitute the "variables" of the CoCoMo model equations. As a suggestion, leave all of the attributes at "nominal" and review the questions to see if any attributes really should be adjusted up or down; nominal works for most things. The attributes are divided into four categories: project, product, platform and personnel, described below.

Product attributes include how reliable the product needs to be (is it the software that controls the autopilot system for a commercial jetliner, or xeyes?), size and complexity of the database, and product complexity (are the algorithms well understood, or cutting edge?).

Project attributes cover how the project is executed: the use of engineering tools and development methodologies, extent of distributed collaboration required, and the overall schedule demands.

Platform attributes include execution and memory constraints (is the platform an 8051 with 128 bytes of RAM, or a high-end server with 128 CPUs and a terabyte of RAM?). It also includes platform volatility (is the hardware still in development, or is it a mature product that is already shipping?).

Personnel attributes address how capable the engineering team is, their familiarity with the product, the platform, and the language. There is always a tendency to claim your team members are above average, but in reality, most teams are "nominal".

After setting all of the attributes, you click a button, and it gives you a measure of the staff-months to execute the project, as well as a schedule (calendar months) measure. This isn't to say that your project will take this long or cost this much. But it is a measure of what similar projects have cost.
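
To give a feel for the shape of the model, here is a minimal sketch in C using the published Basic CoCoMo '81 "organic mode" equations (effort = 2.4 * KLOC^1.05 staff-months, schedule = 2.5 * effort^0.38 calendar months). This is the simple early form of the model, not the CoCoMo II calculator with all the cost-driver attributes described above, so don't expect it to reproduce the numbers in the examples below.

    /* Basic CoCoMo '81, organic mode (link with -lm).
     * The 9.6 KLOC figure matches the first example in the next section;
     * the basic model, with no cost drivers, gives a rougher answer than
     * the CoCoMo II tool used there. */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double kloc     = 9.6;                      /* thousands of source lines */
        double effort   = 2.4 * pow(kloc, 1.05);    /* staff-months              */
        double schedule = 2.5 * pow(effort, 0.38);  /* calendar months           */
        double staff    = effort / schedule;        /* average team size         */

        printf("%.1f KLOC -> %.1f staff-months over %.1f months (~%.1f people)\n",
               kloc, effort, schedule, staff);
        return 0;
    }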

CoCoMo: Historical Examples

As an example, take a small project my team just completed. It took a little over two years to develop the software. The first year I had two people working on it; for the next year and a quarter there was just one person available to work on the project. Total cost was approximately 39 staff-months, and in the end there were 9,600 lines of C++ code.

The software ran on an existing OS and an existing CPU, with enough memory and storage. But it was controlling a newly designed hardware system that attached to the computer, so I set the platform volatility attribute to "high". I left all other attributes at nominal -- if I wanted to spend more time, I could probably tweak them, but for a quick demo, I just accepted the default settings. I plugged these values into the CoCoMo tool, and it predicted a cost of 37.9 staff-months -- within 3% of the actual cost. CoCoMo also predicted that the project could have been completed in about a year with a little more than three people. Perhaps, but my schedule was driven more by staff and hardware availability than by time-to-market.

Another project I completed recently had 95,000 lines of code, and took a team of 12 people just under three years to complete and ship. That comes to about 420 staff-months of development. I plugged the 95,000 number into CoCoMo, and since there was nothing earth-shattering about the project I left all the attributes at nominal. CoCoMo came up with 439.8 staff-months. OK, CoCoMo is high by almost 20 staff-months, but keep in mind that's an error of only 4.7%. Pretty good, when you realize that all I gave CoCoMo was one number: the lines of code. Next time, I'll tell CoCoMo that my team is above average in capability.

Using CoCoMo on historical data is a nice confirmation of the model. But a model is only valuable if it can predict the future, and this model is only useful if your variables are accurate. Specifically, you need an accurate measure of the source lines of code that you're going to develop or reuse. But quite frankly, I often find it's easier for many engineers to tell me the amount of code they need to produce rather than the amount of time it's going to take them.

If you can find another project that is similar, it can be fairly easy to come up with a reasonably accurate measure of the lines of code you will need to develop. Consider the last example. I have an old email from back before the project started where someone points out that this project is about half the scope of some other project we finished the previous year. I just checked, and the last project developed 208,000 lines of code. With 104,000 lines of code, CoCoMo predicts 485 staff-months of effort. That very quick and rough back-of-the-envelope measure predicted the number of lines of code to within 10% of actual, and CoCoMo gave us a measure of the staff costs to within 15% accuracy. Not bad for 10 minutes of analysis. Compared to the four weeks we spent at the start of the project listing all of the high-level requirements, decomposing them into tasks and sub-tasks, and creating plans, CoCoMo is much faster and more accurate.

CoCoMo in Plan Analysis

CoCoMo does not develop plans for you. It is a tool for analyzing plan data.

I find the best way to use CoCoMo is to confirm or contradict the detailed planning work that you are doing. After defining your tasks and measuring them, the work adds up to some total cost for the project. You can then use CoCoMo to see if the sum of the tasks is reasonable, as a sanity check. If your detailed plan differs from CoCoMo by more than, say, ten or twenty percent, I would start to worry.

I've also found CoCoMo to be an excellent independent tool for defending project cost. When selling a plan, you can present detailed plans and show how you came up with your costs. Then you can present how CoCoMo confirms your analysis with a similar cost figure. When a person can back up a plan with a well-established model such as CoCoMo, it adds a lot of credibility to the plan.


Other Plan Analysis Techniques:
Copyright 2007, Robert J. Hueston. All rights reserved.

Monday Apr 23, 2007

Plan Analysis: Smart Deliverables

In my posting Know what you're doing (Part III) I introduced the concept of engineering a plan. One of the key steps in engineering, plans or products, is analysis.

This is the second in the series of analytical techniques for plans: Analyzing deliverable definitions.

Deliverables

Analyzing a product (a circuit board, or a software library for example) usually involves identifying the inputs, and how those map to outputs. Analyzing a plan involves the same thing.

In a product design, for example designing a circuit board, you might start with the inputs and outputs of the board. As you decompose the design, you specify the inputs and outputs of functional blocks, the inputs and outputs of chips, and perhaps, if your design includes FPGAs or ASICs, the inputs and outputs of functional blocks within the chip.

Inputs and outputs must be well defined. It would not be acceptable to say a signal should try to do its best to go high when an input goes low. Or a signal should change state "at some point" in the future without providing a tight time limit. It's also fruitless to define output signals that are not needed or used by any other circuit (we might realize that the person designing the chip will probably need such a signal internally, but we don't spend time trying to guess how a chip might work internally). [For software engineers, consider the old K&R method of declaring function names without function prototypes. You had to guess what the inputs and outputs of each function were, the compiler didn't enforce them, and if you got them wrong, you wouldn't know until the code didn't work right. If you were lucky, the program would seg fault; if you weren't lucky, it would misbehave in subtle ways. Tightly specifying function inputs and outputs, including the number of arguments, their types, and whether they were const or variable, was a key addition to the C language that made it possible to engineer large-scale applications in C.] We all know that if an input or output is not properly specified, the result will likely be a product that just doesn't work. We all know this, yet when we engineer a plan, a common problem is vague definition of inputs and outputs.
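
For anyone who never wrote pre-ANSI C, here is a minimal sketch of that difference (the area() function is made up for illustration):

    #include <stdio.h>

    /* ANSI C prototype: the compiler checks the number and types of the
     * arguments at every call.  Under old K&R C the declaration would have
     * been just "int area();", and a call like area(3.5) would have compiled
     * silently and misbehaved at run time. */
    static int area(int width, int height);

    int main(void)
    {
        printf("%d\n", area(3, 4));       /* checked against the prototype */
        /* printf("%d\n", area(3.5)); */  /* now rejected at compile time  */
        return 0;
    }

    static int area(int width, int height)
    {
        return width * height;
    }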

For a project, inputs and outputs are deliverables: what you deliver from your project are your outputs; what you need delivered to your project are your inputs. Deliverables must be specified with the same sort of engineering rigor as electrical signals in a circuit design.

When we say that something is well-defined, we usually mean it is specific, measurable, achievable, relevant, and time-bound: SMART.

Specific

Specificity in engineering is commonplace. But there tends to be a lack of specificity when engineering a plan, when we describe the outputs of a task, or the outputs of the entire project. What, specifically, will be delivered?

As an example, I've often seen the task deliverable "code complete" in a plan. "Code complete" is not very specific. To one person, it may mean that they have finished typing the code, but they haven't compiled it yet. To another person, it may mean that the code is written, compiles cleanly, has been inspected by peers, and completes a set of unit tests (after all, how do you know it's "complete" unless you've tested it?). Whose interpretation of "code complete" is correct? They both are, because "code complete" is not specific and is open to broad interpretation. Whose fault is it that the deliverable does not meet expectations? The project lead's.

Another slightly humorous example I saw recently was a schedule item called "unit testing complete". The owner claimed he was done, and indeed he had run all the unit tests on his code, but half of them failed. The project lead felt "unit testing complete" meant "unit tests run and all tests pass". Or there was the time the project lead thought "requirements complete" meant the requirements document was reviewed and approved, while the other person thought it meant the document was written and ready to start being reviewed. As a general rule of thumb, if a deliverable has the word "complete" in the definition, then it probably isn't defined completely.

One easy way to address specifics is to define a set of standing rules. For example:

  • Not specific: "Requirements complete." Specific: Requirements documented, all issues and TBDs resolved, and reviewed and approved by all applicable parties.
  • Not specific: "Design complete." Specific: Design complete, meets all requirements with no outstanding issues, and has been reviewed and approved.
  • Not specific: "Code complete." Specific: Code written, compiles cleanly with no errors or warnings, meets code style guidelines, has successfully completed code inspection, has completed and passed all unit tests, and has been checked into the source code repository.
  • Not specific: "Testing complete." Specific: All planned tests have been executed, and either all tests have passed, or bug reports have been submitted for all failures.
These are clearly just examples, meant to highlight how you could define "complete" in specific terms. Being specific in a plan is as important as being specific in product design.

When analyzing a deliverable to see if the definition is specific, ask yourself: Would everyone have the exact same understanding of the deliverable? If not, then the definition is not specific.

Measurable

When we talk about something being "measurable," we're really saying that we can prove empirically that it's true. In hardware design, a requirement like "the output should toggle really fast" is not measurable, so we cannot judge whether an implementation actually meets the requirement. I could just imagine the quality of a product if the hardware "should do its best to meet most of the set-up and hold requirements" or if the software "should be small and execute fast." The product would likely be unusable; the same is true for a plan that lacks measurable deliverables.

It's sometimes hard to separate the discussion of "specific" and "measurable." If you're not specific, then rarely are you measurable. And in the previous section, I tried to provide examples that were both specific and measurable. On the other hand, it is possible to be specific and still not measurable.

For example, there could be a deliverable such as, "All unit tests execute and pass." It is specific in that the unit tests must exist and both execute and pass. But how do you know that the unit tests are sufficient? If I wrote one unit test, executed it, and it passed, am I done? On the other hand, "All unit tests execute and pass and provide 99% statement coverage as measured by gcov" would be specific and measurable -- it tells you what the completion criteria are and how to measure them. [gcov is a tool that measures which statements of code have been executed.] Anyone could inspect the gcov coverage report to see empirically that the deliverable met all its requirements.
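
As a minimal sketch of what such a deliverable looks like in practice (the clamp() function and file name are made up; the gcc/gcov commands in the comment are the standard coverage workflow), consider:

    /* test_clamp.c -- a tiny unit test whose coverage can be measured:
     *   gcc --coverage -o test_clamp test_clamp.c
     *   ./test_clamp        (exit status 0 means every assertion passed)
     *   gcov test_clamp.c   (reports "Lines executed: ...%" for the file)  */
    #include <assert.h>

    static int clamp(int x, int lo, int hi)
    {
        if (x < lo) return lo;
        if (x > hi) return hi;
        return x;
    }

    int main(void)
    {
        /* Exercise all three branches so every statement is executed. */
        assert(clamp(-5, 0, 10) == 0);
        assert(clamp(50, 0, 10) == 10);
        assert(clamp(7, 0, 10) == 7);
        return 0;
    }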

When analyzing a deliverable to see if it's measurable, ask yourself: How do I know it is done? If you're having trouble with the answer, then the definition of the deliverable probably isn't measurable.

Achievable

It seems ridiculous to have to point out that deliverables must be achievable. But when I started to think about this, unachievable requirements for deliverables are far more common than I had ever realized.

Take, for example, a requirement like, "The code will be complete and bug free before delivery to QA to start testing." In large, complex systems, it is virtually impossible to be bug free, let alone bug free before testing. So what's the problem with a requirement like this? For one, developers will read it, recognize that it's unachievable, and laugh it off without further consideration. Perhaps the requirement gets changed to, "The code will be complete and mostly bug free..."; however, that's not measurable. Maybe the real intent was, "The code will be complete and all bugs found during unit testing will be fixed or waived by the manager of QA before delivery". This last requirement is achievable, measurable and specific.

Be careful of requirements that have words like "all", "none", "never" or "always" in them. That can be a flag that the requirement is not achievable. Note that in the previous paragraph I had "all bugs... fixed or waived..." You may come across a situation where one bug is not resolved. If the deliverable is defined as "all bugs fixed" then you'll have issues to deal with while executing your plan (you'll be running around trying to invent a waiver process); it's better to establish achievable requirements up front so that execution can go smoothly.

When analyzing a deliverable to see if it's achievable, ask yourself: Am I 100% certain this specific and measurable deliverable can be met? If you are unsure, you may be dealing with an unachievable deliverable.

Relevant

Obviously, deliverable definitions should be relevant -- don't specify the color of the paper that a document should be printed on (especially if it's going to be distributed electronically).

But there's another form of relevance that is often forgotten during planning: Don't specify deliverable outputs that aren't inputs to someone else. Seems obvious, but apparently it isn't because I'm constantly seeing plans that include deliverables (documents, code deliveries, etc) which are not needed by anyone else. In many cases these are "internal signals," things that probably must be done as part of the task working toward the deliverable, but they are not deliverables themselves.

I don't know how many times I've been at a project review where a project lead reports that a deliverable, for example the stack usage analysis, is slipping its schedule. The VP will undoubtedly ask, "Who is affected by this delay?" And the answer is usually, "Well, no one. It's just used internal to the team." If no one is affected when a deliverable is delayed, then the deliverable is probably not relevant.

Of course, relevance may have many layers. From a "product" point of view, an internal deliverable may not be relevant. Within the project team, if one team member does not deliver X (an internal deliverable) to another team member, then the schedule for product deliverable Y could be at risk. When working within the team, X is relevant; when discussing the project with external people, Y is what's relevant.

When analyzing a deliverable to see if it's relevant, ask yourself: Who would care if this deliverable is delayed or canceled? If the answer is "no one," then it's probably not relevant.

Time-Bound

Time-bound means there's a due date, a time when the deliverable must be available. It's pretty easy to tell if a deliverable has a time boundary; but it may take some analysis to tell if the time boundary is a good one.

When we put together a schedule we usually have a date when something should be done. But a plan is not an estimate of when you might be done, it's a promise of when you will be done. [That sentence has become sort of a mantra with me and my teams. I can now say the first half, and almost anyone who's worked with me will finish it.] A Gantt chart is not a plan; a schedule is not a plan. A plan is a promise, a contract.

If your best engineers got together and thought they would be done with a product by January 1st, your schedule might say January 1st. But suppose your boss (manager, marketing, venture capitalist) asked for your "drop dead" date -- the date you promise you will be done, the date that gets you fired if you miss it. You might not pick January 1st. You'd pick a date you were 95% confident your team would hit. Maybe February 15th.

A plan should document the dates you promise deliverables. It's fine to say you might be done January 1st, but you're willing to promise February 15th. You'd continue to work your team toward an early finish date of January 1st, but when you report on your deliverables outside the team, you'd report on your confidence of hitting February 15th. And if you finish January 1st, or January 31st, or February 14th, people may be pleasantly surprised, and you will have met your promise.
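
If you want something more than gut feel behind that 95% confidence number, one purely illustrative approach is a quick simulation over per-task estimates. The sketch below uses made-up optimistic and pessimistic durations and a crude uniform draw between them; the only point is to show the gap between the date you might hit and the date you can promise.

    #!/usr/bin/perl
    # confidence_date.pl -- illustrative sketch with made-up numbers.
    # Simulates total project duration from per-task (optimistic, pessimistic)
    # day estimates and reports the 50th- and 95th-percentile finishes.
    use strict;
    use warnings;

    # Hypothetical tasks: [ optimistic days, pessimistic days ]
    my @tasks  = ( [ 10, 20 ], [ 15, 40 ], [ 5, 12 ], [ 20, 35 ] );
    my $trials = 10_000;
    my @totals;

    for ( 1 .. $trials ) {
        my $total = 0;
        foreach my $t (@tasks) {
            my ( $lo, $hi ) = @$t;
            $total += $lo + rand( $hi - $lo );    # crude uniform draw between estimates
        }
        push @totals, $total;
    }

    @totals = sort { $a <=> $b } @totals;
    printf "50th percentile: %.0f days (the date you might hit)\n",
        $totals[ int( 0.50 * $#totals ) ];
    printf "95th percentile: %.0f days (the kind of date you promise)\n",
        $totals[ int( 0.95 * $#totals ) ];

Real estimates aren't uniform and real tasks aren't independent, so treat output like this as a conversation starter, not a schedule.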

When analyzing a deliverable definition to see if it's time-bound, ask yourself: Do I know exactly when it's due, and can I promise to meet that date? If you're not sure of the answer, then the time boundary may only be an estimate, and you need to make it a promise.

Summary

When analyzing the deliverables from a project, ask yourself if they are SMART:
  • Specific: Would everyone have the exact same understanding of the deliverable?
  • Measurable: How do I know it is done?
  • Achievable: Am I 100% certain this specific and measurable deliverable can be met?
  • Relevant: Who would care if this deliverable is delayed or canceled?
  • Time-bound: Do I know exactly when it's due, and can I promise to meet that date?
Some of this may seem pedantic. But the analysis will yield better-defined deliverables, and fewer surprises in the long run.
Copyright 2007, Robert J. Hueston. All rights reserved.

Friday Apr 20, 2007

Plan Analysis: Risks and Dependencies

In my posting Know what you're doing (Part III) I introduced the concept of engineering plans. One of the key steps in engineering, whether of plans or products, is analysis.

Back in college, I took many analysis courses related to my major: Circuit Analysis I and II, Engineering Analysis, and Statistical Analysis, and the content of many other courses stressed analysis as well. A good engineering education must include a good foundation in analysis. Engineering a plan is no different. I wanted to present a few analytical techniques for planning.

The first in the series is risk and dependency analysis...

Risk Analysis

Three common sections in a project plan are: Assumptions, Risks, and Dependencies. I hate assumptions; all assumptions are risks -- you're just not planning on dealing with them. If it were up to me, the word "assume" would be banned from project plans. Dependencies are similar. If a dependency has already been satisfied, then it simply "is". If a dependency has not already been satisfied, then there's a risk that it won't be satisfied. You don't manage dependencies; you manage the risk that dependencies will not be met in a timely fashion.

One way I like to analyze risks and dependencies is a simple table, with columns for:

  • Risk: A description of the risk or dependency, in just enough detail so I remember what I was afraid of. Some people are fanatical that it must be worded as a risk (for example, "Hardware schedule" is not a risk, but "The hardware schedule might slip" is). I'm not fanatical about anything; whatever works for you works.
  • Likelihood: How likely it is that the risk will evolve into a real problem. The likelihood may change over time; something that is unlikely to be a problem at the start of a project may become very likely when the due date is approaching and the risk has not yet been avoided. I like to simply rank the likelihood. You can use any rating system (a scale of 0 to 100, for example), but I prefer the simple high, medium, low ranking.
  • Impact: What the impact would be if the risk becomes a real problem. Again, any rating system can be used, such as high, medium and low. Impact is a bit subjective, but it should address the impact to the overall product. For example, a high-impact risk is one that could cause the entire product to be canceled or significantly delayed. A low impact might mean increased cost or a small impact to the product schedule.
  • Remediation Plan: This is what I'm going to do to ensure that the risk does not become a problem. For dependencies, this might include communicating with the supplier early and often, tracking interim milestones, etc. For technical risks it might mean doing early prototype work, or adding subject-matter experts to the team.
  • Contingency Plan: This is what I'm going to do in case the risk evolves into a problem, that is, in case my remediation plan has failed.
I believe separating likelihood and impact is essential. Too often we concentrate on risks that are very likely but have low impact. What if Joe misses his deadline by a day? Highly likely, perhaps, but if it's only a day, it may be low impact. Some people tend to worry too much about risks with high impact but low likelihood. I've actually seen people list dependencies that have already been delivered, just because it would have been really bad had they not already been delivered. Forcing myself to identify whether a risk is highly likely or highly impactful helps me concentrate on the risks most likely to cause the most problems.

Below is an example of a portion of a risk analysis table.

    Risk: Delays in the hardware schedule may delay prototype availability, and impact boot-code testing.
        Likelihood: Medium
        Impact: High
        Remediation Plan: Attend the monthly hardware status review so that we have early notice if the hardware schedule is slipping.
        Contingency Plan: Spend extra time up-front to improve the simulation environment so that we can continue development even if hardware is delayed. If likelihood increases to "high" before Dec 1, order additional systems for the lab so we can reduce integration time by doing more testing in parallel.

    Risk: Company XYZ must deliver a driver for their network card to support first power-on and boot.
        Likelihood: High
        Impact: Medium
        Remediation Plan: Contacted XYZ and informed them of our technical and schedule needs. Working with the Legal department to get legal agreements in place. Joe in Supplier Management will contact XYZ monthly until the driver is delivered.
        Contingency Plan: Although the ABC network card will not be used in the product, we already have the driver and legal agreements in place. If we don't have the XYZ driver by Dec 15, will purchase a dozen ABC network cards for power-on testing. If we don't have the XYZ driver by Feb 15, will be unable to start performance testing and the product release will be delayed.

    Risk: Plan depends on buying libraries from DEF.
        Likelihood: Low
        Impact: Medium
        Remediation Plan: Purchase order is already written. Management has indicated that they will approve it.
        Contingency Plan: If management does not approve the purchase order by May 5th, will need to assign 3 engineers to start work on a proprietary set of libraries. This will delay project completion by six months unless additional staffing is added.

To create the above table, I have a simple CGI script (written in PERL) which allows me to edit the various fields using my web browser, and allows others (managers, my team members, and other teams) to view my risks whenever they want. I've used this successfully on several projects. [Maybe some day when I write my book, I'll include a CD with all the CGI scripts I use to lead projects. :-) ]

Colors? Where did the colors come from? I've found that colorizing risks has two benefits: (1) It draws your eye to the things you should worry about the most, and (2) Managers often lack the time or attention span (and sometimes the ability) to read long sentences, so they either need cute graphics or colors. And since I'm not good enough at CGI to produce tachometer gauges, traffic-light graphics, or pie charts, I just colorize the rows. For my own purposes, I assume a likelihood or impact of "high" is worth 3 points, "medium" is 2 and "low" is 1. Multiplying the two together yields the overall risk: 9 is critical (red), 2 or less is under control (green), and everything in between is a serious risk (yellow).

There's a fourth color: blue. I'll set the likelihood to "done" to show that the dependency has been met, or the impact to "none" if the risk has passed. "Done" and "none" have a rating of 0, so if either is 0 the overall risk is 0, the item is closed, and the row is colored blue. I might mark a risk closed and leave it in the table for a few weeks before finally deleting it.
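
To make those rules concrete, here is a small Perl sketch in the spirit of the CGI script described above (illustrative only, not the actual script): likelihood and impact map to 3/2/1 points, "done" and "none" map to 0, the product of the two picks the row color, and each risk is emitted as a colorized HTML table row.

    #!/usr/bin/perl
    # risk_row.pl -- illustrative sketch of the colorizing rules, not the real
    # CGI script. Maps likelihood x impact to a color and prints one risk as a
    # colorized HTML table row.
    use strict;
    use warnings;

    # high=3, medium=2, low=1; "done" and "none" are 0, which closes the item.
    my %points = ( high => 3, medium => 2, low => 1, done => 0, none => 0 );

    sub risk_color {
        my ( $likelihood, $impact ) = @_;
        my $score = $points{ lc $likelihood } * $points{ lc $impact };
        return 'blue'   if $score == 0;    # closed ("done" or "none")
        return 'red'    if $score == 9;    # critical
        return 'green'  if $score <= 2;    # under control
        return 'yellow';                   # everything in between: serious risk
    }

    sub risk_row {
        my ($risk) = @_;
        my $color = risk_color( $risk->{likelihood}, $risk->{impact} );
        return qq{<tr bgcolor="$color"><td>}
            . join( '</td><td>',
                @{$risk}{qw(risk likelihood impact remediation contingency)} )
            . "</td></tr>\n";
    }

    # Example row, loosely based on the XYZ driver dependency above.
    print risk_row(
        {   risk        => 'Company XYZ must deliver a network card driver.',
            likelihood  => 'High',
            impact      => 'Medium',
            remediation => 'Joe in Supplier Management contacts XYZ monthly.',
            contingency => 'Fall back to the ABC card for power-on testing.',
        }
    );

Keeping the color decision in one small function makes the rule easy to tweak later (say, if a score of 6 should also be treated as critical).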

Early in the planning phase, you may come across a lot of risks, such as the risk that development will take longer or that emergent tasks will arise. But as you do analysis, you should start planning for problems and a reasonable number of emergent tasks. Once you plan for problems, then it's not a risk that those problems will arise; it's the plan. In effect, the impact drops to "none" since the plan already accommodates these problems. When you're done with the planning phase, there should (hopefully) be few true risks that your plan does not already fully address.

When a good process becomes a bad methodology

I found this approach to be very useful, as did others. One day someone decided to establish a formal process for creating and using the risk analysis table. Instead of CGI and a web page, they created a spreadsheet.

In addition to likelihood and impact, they added "visibility" (your ability to observe the state of the risk; presumably risks that are hard to monitor warrant closer scrutiny). With three factors, each rated from 1 to 5, there were now 125 different "states" a risk could be in, so an appropriate number of colors were added to the rows -- chartreuse, fuchsia, and a few colors I didn't even know existed (and I'm not even sure they had names). The spreadsheet also included columns for things like who owned the external dependency, what their promised date was, whether they had agreed to your need date, when you talked to them last, and when the row had been last updated (just to make sure you were checking and updating your risks regularly).

The spreadsheet ended up with so many columns that it was impossible to view them all at the same time, even on a 21-inch monitor. Since this was a spreadsheet and not a web page, it became more difficult to share. I was told to post the spreadsheet on a web page so people could download it as a file and open it. (I've found that most people want information immediately, and if they have to download a file, their patience is exhausted and they don't bother.)

Soon, a team of people was responsible for making sure that every project leader had a risk and dependency spreadsheet. The "Spreadsheet Police" would check periodically to make sure you were updating your spreadsheet regularly. At quarterly program reviews with the engineering vice president, we were required to display the spreadsheet (shrunk down to an unreadable 6-point font and projected onto a screen) and discuss it with the VP.

A simple, informal process had become worse than a formal process; it had become a methodology. A bad methodology.

Project leaders hated the process. It didn't help them manage risks and dependencies, and only wasted their time updating useless information. Managers and VPs were frustrated because the display was too small to read and the content too detailed to absorb at their level of interest. Eventually, the entire process was scrapped.

The next day, I spun up my CGI script and went back to using my old web page for tracking risks and dependencies, and I've been using it ever since.

The moral of the story is simple: Follow the processes that help you, in a way that helps you the most. And if you do find a process that works well for you, don't tell anyone, or they'll turn it into a methodology!


Copyright 2007, Robert J. Hueston. All rights reserved.