Plan Analysis: Risks and Dependencies
By Bob Hueston on Apr 20, 2007
Back in college, I took many analysis courses related to my major. Circuit Analysis I and II, Engineering Analysis, Statistical Analysis, and the content of many other courses stressed analysis. A good engineering education must include a good foundation in analysis. Engineering a plan is no different. I want to present a few analytical techniques for planning.
The first in the series is risk and dependency analysis...
Risk Analysis
Three common sections in a project plan are: Assumptions, Risks, and Dependencies. I hate assumptions; all assumptions are risks, you're just not planning on dealing with them. If it were up to me, the word "assume" would be banned from project plans. Dependencies are similar. If a dependency has already been satisfied, then it simply "is". If a dependency has not already been satisfied, then there's a risk that it won't be satisfied. You don't manage dependencies; you manage the risk that dependencies will not be met in a timely fashion.
One way I like to analyze risks and dependencies is a simple table, with columns for:
- Risk: A description of the risk or dependency, in just enough detail so I remember what I was afraid of. Some people are fanatical that it must be worded as a risk (for example, "Hardware schedule" is not a risk, but "The hardware schedule might slip" is a risk.). I'm not fanatical about anything; whatever works for you works.
- Likelihood: How likely it is that the risk will evolve into a real problem. The likelihood may change over time; something that is unlikely to be a problem at the start of a project may become very likely when the due date is approaching and the risk has not yet been avoided. I like to simply rank the likelihood. You can use any rating system (a scale of 0 to 100, for example), but I prefer the simple high, medium, low ranking.
- Impact: What is the impact if the risk becomes a real problem? Again, any rating system can be used, such as high, medium and low. Impact is a bit subjective, but it should address the impact to the overall product. For example, a high impact risk is one that could cause the entire product to be canceled or significantly delayed. A low impact might mean increased cost or a small impact to product schedule.
- Remediation Plan: This is what I'm going to do to ensure that the risk does not become a problem. For dependencies, this might include communicating with the supplier early and often, tracking interim milestones, etc. For technical risks it might mean doing early prototype work, or adding subject-matter experts to the team.
- Contingency Plan: This is what I'm going to do in case the risk evolves into a problem, that is, in case my remediation plan has failed.
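The five columns above amount to a simple record per risk. Here is a minimal sketch of one (in Python rather than the author's Perl; the field names and the example values are my own illustration, not his script):

```python
from dataclasses import dataclass

@dataclass
class Risk:
    risk: str          # description: just enough detail to remember what I was afraid of
    likelihood: str    # "high" / "medium" / "low"
    impact: str        # impact to the overall product: "high" / "medium" / "low"
    remediation: str   # what I'll do to keep the risk from becoming a problem
    contingency: str   # what I'll do if remediation fails

# One row, abbreviated from the example table below.
hardware = Risk(
    risk="Delays in the hardware schedule may delay prototype availability.",
    likelihood="medium",
    impact="high",
    remediation="Attend the monthly hardware status review.",
    contingency="Improve the simulation environment up front.",
)
```

A list of such records is the whole data model; everything else (the web page, the colors) is presentation.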
Below is an example of a portion of a risk analysis table.
| Risk | Likelihood | Impact | Remediation Plan | Contingency Plan |
| --- | --- | --- | --- | --- |
| Delays in the hardware schedule may delay prototype availability, and impact boot-code testing. | Medium | High | Attend the monthly hardware status review so that we have early notice if the hardware schedule is slipping. | Spend extra time up-front to improve the simulation environment so that we can continue development even if hardware is delayed. If likelihood increases to "high" before Dec 1, order additional systems for the lab so we can reduce integration time by doing more testing in parallel. |
| Company XYZ must deliver a driver for their network card to support first power-on and boot. | High | Medium | Contacted XYZ and informed them of our technical and schedule needs. Working with Legal department to get legal agreements in place. Joe in Supplier Management will contact XYZ monthly until the driver is delivered. | Although the ABC network card will not be used in the product, we already have the driver and legal agreements in place. If we don't have the XYZ driver by Dec 15, will purchase a dozen ABC network cards for power-on testing. If we don't have the XYZ driver by Feb 15, will be unable to start performance testing and the product release will be delayed. |
| Plan depends on buying libraries from DEF. | Low | Medium | Purchase order is already written. Management has indicated that they will approve it. | If management does not approve the purchase order by May 5th, will need to assign 3 engineers to start work on a proprietary set of libraries. This will delay project completion by six months unless additional staffing is added. |
To create the above table, I have a simple CGI script (written in Perl) which allows me to edit the various fields using my web browser, and allows others (managers, my team members, and other teams) to view my risks whenever they want. I've used this successfully on several projects. [Maybe some day when I write my book, I'll include a CD with all the CGI scripts I use to lead projects. :-) ]
Colors? Where did the colors come from? I've found that colorizing risks has two benefits: (1) It draws your eye to the things you should worry about the most, and (2) Managers often lack the time or attention span (and sometimes the ability) to read long sentences, so they either need cute graphics or colors. And since I'm not good enough at CGI to produce tachometer gauges, traffic light graphics, or pie charts, I just colorize the rows. For my own purposes, I assume a likelihood or impact of "high" is worth 3 points, "medium" is 2 and "low" is 1. Multiplying the two together yields the overall risk: 9 is critical (red), 2 or less is under control (green) and everything in between is a serious risk (yellow).
There's a fourth color: blue. I'll set the likelihood to "done" to show that the dependency has been met, or the impact to "none" if the risk has passed. "Done" and "none" have a rating of 0, so if either is 0, the risk becomes 0, so the item is closed and the row is colored blue. I might mark a risk closed and leave it in the table for a few weeks before finally deleting it.
Early in the planning phase, you may come across a lot of risks, such as the risk that development will take longer, or emergent tasks will arise. But as you do analysis, you should start planning for problems and a reasonable number of emergent tasks. Once you plan for problems, then it's not a risk that those problems will arise; it's the plan. In effect, the impact drops to "none" since the plan already accommodates these problems. When you're done with the planning phase, there should (hopefully) be few true risks that your plan does not already fully address.
When a good process becomes a bad methodology
I found this approach to be very useful, as did others. One day someone decided to establish a formal process for creating and using the risk analysis table. Instead of CGI and a web page, they created a spreadsheet.
In addition to likelihood and impact, they added "visibility" (your ability to observe the state of the risk; presumably risks that are hard to monitor warrant closer scrutiny). With three factors, each rated from 1 to 5, there were now 125 different "states" a risk could be in, so an appropriate number of colors were added to the rows -- chartreuse, fuchsia, and a few colors I didn't even know existed (and I'm not even sure they had names). The spreadsheet also included columns for things like who owned the external dependency, what was their promised date, whether they had agreed to your need date, when you talked to them last, and when the row had been last updated (just to make sure you were checking and updating your risks regularly).
The spreadsheet ended up with so many columns, it was impossible to view them all at the same time, even on a 21-inch monitor. Since this was a spreadsheet and not a web page, it became more difficult to share. I was told: post the spreadsheet on a web page, and people can download it as a file and open it. (I've found that most people want information immediately; if they have to download a file, their patience is exhausted and they don't bother.)
Soon, a team of people was responsible for making sure that every project leader had a risk and dependency spreadsheet. The "Spreadsheet Police" would check periodically to make sure you were updating your spreadsheet regularly. At quarterly program reviews with the engineering vice president, we were required to display the spreadsheet (shrunk down to an unreadable 6-point font and projected onto a screen) and discuss it with the VP.
A simple, informal process had become worse than a formal process; it had become a methodology. A bad methodology.
Project leaders hated the process. It didn't help them manage risks and dependencies; it only wasted their time on updating useless information. Managers and VPs were frustrated because the display was too small to read and the content too detailed to absorb at their level of interest. Eventually, the entire process was scrapped.
The next day, I spun up my CGI script, and I was back to using my old web page for tracking risks and dependencies, and I've been using it ever since.
The moral of the story is simple: Follow the processes that help you, in a way that helps you the most. And if you do find a process that works well for you, don't tell anyone, or they'll turn it into a methodology!