Monday Jan 29, 2007

Old dogs and new tricks

The saying goes, "You can't teach an old dog new tricks." And while there is some truth in it, it is also probably one of the most misunderstood maxims.

Note that the subject of the maxim is not the dog; the subject is you. "You can't teach..." The saying is not, "Old dogs cannot learn new tricks." The problem isn't with the old dog; it's with you.

Old dogs are perfectly capable of learning. A dog learns tricks when it is in the dog's best interest to learn them: when it believes it needs to learn a new trick in order to earn the respect of the owner, when it needs to survive. But after the owner and dog have lived together for years, they know each other. The dog knows that the owner loves and respects it; the dog knows it does not have to learn any new tricks to win approval. A dog is actually very smart, and lazy. It knows that if it just continues to do what it's always been doing, it will still get fed tomorrow.

If you bring in a new owner, on the other hand, the dog is no longer complacent in its current situation. It doesn't know if the new owner will accept the dog unless it learns new tricks. The dog is therefore motivated to learn in order to ensure a secure and prosperous future. I recently adopted a seven-year-old dog. The previous owner warned that the dog begged at the table and there was no way to discourage it. So she continued to feed it table scraps while she ate. But once in our home, after a few days of making it clear that begging at our table was not allowed, the dog stopped. It now sits just outside the kitchen door while we eat and waits, knowing that it will be fed as soon as we're all done. It quickly learned the trick: If it waited patiently it would get fed.

Most people (and almost all engineers) are smart and lazy, like dogs (the rest of the people are just dumb and lazy). If a leader tries to teach a team of engineers a new trick, it can be extremely difficult. I once had a manager who never required status reports. Then one day, he announced that all engineers would have to submit written status reports by 5pm every Friday. We all listened carefully, and laughed on the inside. The first week, everyone submitted status reports. The second week, about half the people submitted status reports. I think only one person filed a report in the third week (and no, it wasn't me). We all knew that the manager was not about to fire anyone for failing to submit a status report. And he didn't. He mentioned the status reports for a couple more weeks, then the matter simply died quietly. We all knew it would.

Sun itself is another example. A couple of years ago, the dot-com bubble had burst and Sun was losing money left and right. While Scott McNealy is a terrific leader who grew Sun to be a key player in the Unix server market, it took a new leader to teach this old dog new tricks.

If a team is going to learn new tricks, that is, adopt significant new processes, first the leader must change. The change may be physical (replace the person who is the leader) or metaphysical (the leader changes himself). It can be extremely difficult, sometimes impossible, for a person to change himself, hence the saying: You can't teach an old dog new tricks.

Wednesday Jan 24, 2007

When Good Enough Is Good Enough

I have a broken, shattered vase on my desk. It's an eyesore. And a lesson.

Years ago, I had a small ash tree growing up through a fence in my yard. It was one of the few trees in my yard, and I was reluctant to cut it down, but I realized either the tree or the fence had to go. I wanted to preserve a bit of my tree by turning (on a lathe) one section into a vase that could sit on my mantle.

Turning "green wood" (newly cut wood that is still green and wet inside) is different from turning store-bought, kiln-dried wood. The wood is turned on the lathe and shaped with chisels and gouges as usual. But after being shaped, the wood continues to dry, and in the process it twists and contorts into unusual shapes; you have no idea what your work will look like until months later, when the wood has fully dried. Instead of a vase that looks machine-made, it often looks like it has melted slightly, taking on a Dali-esque appearance.

I saved a nine-inch section of my tree, and turned it into a very nice vase. My wife thought it looked beautiful and put it on the mantle (and believe me, she doesn't allow just anything on her mantle). But from my unique vantage point in my chair, when the sun shone through the skylight at just the right angle, I could see a flaw. There was a spot where my gouge must have dug in a little too deep and left a tiny, almost undetectable groove near the foot of the vase. It drove me crazy.

After a few days on the mantle I could take no more. I took the vase back out to my shop, and chucked it up on the lathe again. But when I turned on the lathe, I realized even just a few days on the mantle had caused the vase to dry and change shape. It was no longer uniformly round, and as a result, it was no longer well balanced on the lathe. As the vase began to spin, it started to wobble, and the lathe began to vibrate. The vase flew off the lathe, whacked my face shield pretty hard, then landed on the cement floor and shattered.

I turned a piece of a lovely shade tree into a beautiful vase. But I wasn't happy with a beautiful vase; I wanted a perfect vase, and my own desire for perfection left me with a shattered piece of trash. I keep that vase on my desk to remind myself to be satisfied with beauty, because when you change something, one possible outcome is disaster.

The vase is, of course, a metaphor for my work in software engineering. Often, software engineers fall in love with their code, and although they have produced a beautiful product -- a product that meets all the requirements -- they still like to tweak and optimize it. I'm guilty in this area myself. While investigating an unrelated bug report, I saw an awkward piece of code I had written months earlier. As part of my bug fix, I also rewrote this section of code to be more elegant, more streamlined, more perfect. In the process, I introduced a bug at a boundary condition. Whenever a software engineer touches a piece of working code, there is always a non-zero probability that a new defect will be introduced. Sometimes we have to accept that good enough is good enough.

Tuesday Jan 23, 2007

Five things you don't care about me

I hate this meme, but I've been tagged by Josh Simons, so here goes... Five things you don't know about me (and probably don't care):
  1. In high school, I was an athlete. Those who see my 200+ pound girth today probably can't imagine the 6'1", 140-pound hurdler of my youth. Even today, nearly 30 years later, my name is still on the track and field record board at my high school. Luckily they don't make you come back every few years to defend your records :-)
  2. I enjoy woodworking, and have done construction, furniture, turning (vases, bowls and lamps), and clocks. A couple of my projects (my roll-top desk and old pine cupboard) are listed on The New Yankee Workshop web page (although they managed to put the desk photo with the cupboard description, and vice versa).
  3. In high school I seriously considered a career in writing. Then I learned that a B-average engineer with a bachelor's would make twice the salary that an A-plus writer with a master's degree makes, and my future was decided: I'd be an engineer by day and a blogger by night!
  4. My Mom died last year, after a very hard, two-year struggle with cancer. Very few people I work with even knew she was sick. Very few, as in two or three; I consider it a personal matter. A co-worker and friend some years ago complained about how hard it was to maintain a personal life at work. I told her that your personal life is that part of your life you choose to keep personal. If you share everything about your home life at work, then in effect you choose to have no personal life. I, on the other hand, choose to have a personal life, and keep much of my life personal.
  5. I have the smartest, most talented, and most beautiful three-year-old girl that the world has ever seen. Funny how no one knows that about me, especially the other parents of three-year-olds. :-)

Thursday Jan 18, 2007

Be Careful What You Measure, It Might Improve

There's a saying: That which is measured improves; that which is ignored degrades. But be careful what you measure, because it just might improve.

Collecting and analyzing metrics is not something to be undertaken without the proper preparation. When you apply metrics to any activity, the metric may "improve" at the expense of other activities; the net effect might be negative.

A long time ago, I was on a large software team developing a new telecommunications product. The project was supposed to be a port of an existing code base to a new hardware platform and OS. The differences in hardware and OS were greatly underestimated, and the result was a huge number of bugs.

Management was concerned that (A) product quality was very low, with a high bug count, (B) testing was finding new bugs too slowly, and (C) development was fixing bugs too slowly. So they decided to employ metrics. They decided to measure:

  • The number of bug reports filed per tester.
  • The number of bugs fixed per developer.
The metrics were tracked weekly, and a sorted list showing how each employee was doing against metrics was distributed to everyone. When the product shipped, bonuses would be tied to individual performance against metrics.

At first glance, this seems like a reasonable approach -- it encouraged testers to find lots of bugs, and developers to fix lots of bugs, and it also encouraged them to do it quickly so the product would ship and bonuses would be paid. The result, however, was a disaster.

Testers began filing bug reports by the truckload. Every combination and permutation of conditions resulted in a new bug report: The wrong ring tone is used when connected to a BRI interface. The wrong ring tone is used when connected to a Tri-BRI interface. The wrong ring tone is used when connected to a PRI interface. The ring tone had nothing to do with the network interface; it was one software bug, but it was reported as three separate bug reports, thus boosting the individual tester's metrics.

Developers were no less guilty. They would seek out the bugs that were quick and easy to fix in order to drive up their metrics. Critical, high-priority bugs that would take a long time to diagnose and debug were avoided like the plague. The quality of code changes dropped too, and not just because of haste: if a developer fixed one bug and introduced two others, then the tester got to file two more bug reports, and the developer got to fix two more bugs, driving up everyone's metrics. I honestly believe no one was intentionally introducing new bugs, but the metrics discouraged developers from testing for regressions.

In the end, the metrics improved -- the rate of bug reports being filed went up, and the rate of bug fixes increased. But the goal was not achieved; overall, the quality of the product actually decreased. After a few weeks, management realized what was happening. They decided to measure the total number of open bugs, instead of maintaining metrics on a per-person basis.

When designing a system of metrics (and it is a design process), one has to consider the goal, and identify the measurable criteria directly related to that goal. In this example, the goal was to improve product quality quickly. Unfortunately, the selected metrics were only second-order criteria and were not directly related to the goal. The metrics improved while the goal slipped away. In other words, what you measure will improve, so be careful what you measure.
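
To make the distinction concrete, here is a minimal sketch (Python, with hypothetical bug-record fields) contrasting the per-person counts we started with against the goal-aligned measure we ended up with, the total number of open bugs:

    from collections import Counter

    # Hypothetical bug records; the field names are made up for illustration.
    bugs = [
        {"id": 101, "filed_by": "tester_a", "fixed_by": "dev_x", "open": False},
        {"id": 102, "filed_by": "tester_a", "fixed_by": None,    "open": True},
        {"id": 103, "filed_by": "tester_b", "fixed_by": "dev_y", "open": False},
        {"id": 104, "filed_by": "tester_b", "fixed_by": None,    "open": True},
    ]

    # The original, second-order metrics: activity counted per person.
    reports_per_tester = Counter(b["filed_by"] for b in bugs)
    fixes_per_developer = Counter(b["fixed_by"] for b in bugs if b["fixed_by"])

    # The goal-aligned metric: how many bugs remain open, tracked over time.
    total_open_bugs = sum(1 for b in bugs if b["open"])

    print(reports_per_tester, fixes_per_developer, total_open_bugs)

The per-person counters reward activity; the open-bug count (tracked week over week) is the number that actually moves toward the goal of a shippable product.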


Copyright 2007, Robert J. Hueston. All rights reserved.

Wednesday Jan 17, 2007

Anticipate the Unanticipated

The key to quality project planning is planning for the worst, and the worst of the worst is the unexpected. How can one anticipate the unanticipated?

Software engineering as a discipline has existed for maybe 20 or 30 years. Even electrical engineering is a young 130 years old. By comparison, man has been making war for over 12,000 years, and the science of military planning has a rich history that we can learn from. War, being little more than organized chaos, mandates planning for the unexpected.

Battlefield Reserves

Military planning anticipates the unanticipated using reserves. When a division (approximately 10,000 soldiers organized into three ground brigades) enters the battlefield, it will normally advance with two brigades, and hold one brigade in the rear, just a couple of hours behind the main action, as a tactical reserve. The reserve brigade is available to exploit an unexpected vulnerability in the enemy's line, or to support a forward brigade if it meets unexpected resistance. Similarly, when a brigade is exhausted from battle, the reserve brigade can move forward and take up the offensive.

On a larger scale, an army corps may engage the enemy with three or four forward divisions, and hold one division in the rear as a strategic reserve, ready to move when a forward division needs support. Moving a division of 10,000 men is no easy task, and it may take days to bring the reserve division fully into the battle. As a result, a strategic reserve is no substitute for a well-planned tactical reserve at the division level.

This same approach can be employed effectively in software engineering projects.

Tactical Reserve

In an engineering project, tactical reserve takes the form of staffing reserve, feature reserve, and schedule reserve.

Staffing reserve means loading your engineers below 100% in the planning phase. I generally figure an engineer will spend 20% of his time on non-development activities: attending meetings, doing overhead tasks, sick time and vacation (note that 12 paid holidays and three weeks of vacation already amount to 10% lost time). Even so, I try to load my engineers at 70%: 20% for overhead plus an extra 10% in reserve. There is additional staffing reserve -- the other 128 hours a week that the engineer is not normally scheduled to work (OK, all 128 hours are not available, but for short periods of crisis, an engineer who normally works 40 hours a week can easily provide an additional 50% to 100%).
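
The arithmetic behind those percentages is simple; here is a small sketch (Python, using only the figures quoted above) that works out the lost time and the planned load:

    # A sketch of the loading arithmetic above (the numbers are the ones
    # quoted in the text, not a general rule).
    work_days_per_year = 52 * 5                  # about 260 working days
    holidays = 12                                # paid holidays
    vacation_days = 3 * 5                        # three weeks of vacation
    lost = (holidays + vacation_days) / work_days_per_year
    print(f"holidays and vacation alone: {lost:.0%} lost time")   # roughly 10%

    overhead = 0.20      # meetings, overhead tasks, sick time, vacation
    reserve = 0.10       # tactical staffing reserve
    planned_load = 1.0 - overhead - reserve
    print(f"planned load per engineer: {planned_load:.0%}")       # 70%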

At the start of a project, developers may find they can spend 80% or more of their time on planned tasks, and so they get ahead of schedule. Then a problem arises, and the project leader needs to pull one engineer off his tasks to address the problem. Because the engineer was already on the project, there is very little ramp-up time, and the engineer is able to make good progress on the problem very quickly, while making little progress on his planned tasks. But since he was already slightly ahead of schedule, the overall impact is minimal, and if necessary, a little overtime might get the entire project back on schedule.

    [I know several engineers who normally put in 60 to 80 hours a week. Some people like them on their team because they give more than 100%. I don't, because when I need everyone to give an extra 50%, they have nothing in reserve.]
Feature reserve comes from triaging features at the start of a project. I like to categorize features as must-have, nice-to-have and non-features. I try to be up-front that my project will deliver all of the must-have features, and may deliver some of the nice-to-have features. The plan itself would support implementing all of the nice-to-have features. But when problems arise (and they will), and we run out of staffing reserve, I can start to drop nice-to-have features to protect the schedule. If things go fairly smoothly, we may deliver a product with lots of nice-to-have features; if things go very badly, we might have a bare-bones product, but we will have a product.

Schedule reserve is the project's ability to slip schedule without slipping the release date. That might sound contradictory, but it isn't. As an example, consider a new software feature going into Solaris to support new hardware which will be available in June. The base plan may be to release the feature in a Solaris Update release due out in June. But the testing and release process for a Solaris Update is on the order of three months, so the software would need to be complete by March. On the other hand, the feature could be released as a patch to Solaris; patches cost more to release, but only take a few weeks to test and release, and can be further expedited (at additional cost) to release in a matter of days. The base plan would be to complete the software by March and release with a Solaris Update, but the schedule reserve allows completion to slip until May, or even June, and still release on time (albeit with added cost).

    [Note: I have no idea when the next Solaris Update release is coming out. The June date is only used as an example.]

Strategic Reserve

Strategic reserve is normally factored into the strategic planning. At a business unit level, the project portfolio may include must-have and nice-to-have projects; and the project schedules may include schedule reserve (for example, we want to release a new feature a year before company XYZ, but it's OK if that lead slips to six months).

When one project runs into problems, management can take people from a different project (one with lower priority, or one with more schedule reserve) and apply them to the problem area. Of course, moving a person from one project to another does require a ramp-up time; the person moving may spend days reading project requirements and design documents, and figuring out where they fit into the project.

Due to the cost of moving strategic reserves, it is imperative that every project have some degree of tactical reserve.

I've seen both sides of strategic reserves in motion. I've been on projects where things go so wrong that more staff is needed, sometimes causing another project to implode. I've also been on projects where management reassigned one or more key engineers to another project. That can be an emotionally frustrating position, but one has to keep in mind that these decisions are being made at a strategic level, and management must do what is best for the business, even if it means dismantling your project.

Outro

An engineer leading his first project once said he wished he could be more like me and not worry so much. That's humorous, because I'm the biggest worrywart around. But I channel my anxiety-induced energy into planning, and in a solid plan I find comfort. I mostly worry about the unexpected, so once I plan for the unexpected, I know I have nothing to worry about.


Recommended reading:
  • Clancy, Tom, and Franks Jr., General Fred, Into the Storm, New York, 1997.
  • Keegan, John, A History of Warfare, New York, 1993.

Copyright 2007, Robert J. Hueston. All rights reserved

Tuesday Jan 16, 2007

Everything I needed to know about project planning I learned from my little league coach

When I was a kid, I loved baseball. I was good at batting, but I hated being in the field. I could catch the ball as well as anyone, but once I had the ball, I hesitated, especially when there were already men on base. Should I throw to second and hold the base runners? Should I throw to third and try to get the runner out? And while I hesitated, the runners kept running and the problem kept changing. In the end, I would often compromise and throw the ball halfway between second and third -- no man's land -- and the runners would just keep running.

My failing was in my fielding strategy: Whenever the batter approached the plate, I would close my eyes and pray that they bunted. If the ball was never hit to left field, I wouldn't have a problem. Sadly, this strategy rarely worked.

After years of sub-par performance in the field, my coach pulled me aside and gave me some advice that has stuck with me to this day: Before the pitch is thrown, decide what you're going to do if the ball is hit to you. What will you do if you catch a fly? What will you do if it reaches you on a hop? What will you do if it's hit over your head and rolls to the fence?

This was brilliant advice! While I would still stick with my main strategy -- pray for a bunt -- I could add a new dimension: thinking ahead, and baseball gives you lots of time to think. While the batter was picking out his bat, tapping the dirt out of his cleats, and adjusting his, uh, stance, instead of counting the number of dandelions in left field, I could be thinking about what I should do if the batter did not bunt. Why didn't my coach tell me this years earlier!

Fast forward 30 years...

Today, I often see software project plans that assume the project will be fully staffed on day-one, the hardware will arrive exactly on time with no serious bugs, all external dependencies will be met with perfect quality, and no one on the team will ever get sick, quit, or take a vacation day. The project leader lists those assumptions in the project plan, saying, "If all of the assumptions are met, this project will be successful." The assumptions are really risks that they're not planning to deal with and they're just hoping will go away. Basically, they're planning to fail; praying for a bunt.

It's easy to plan for the best. The key to quality project planning is planning for the worst, contingency planning, planning for the case where the ball gets hit to you, or over your head. And maybe you don't want the ball, but sooner or later it will be hit your way. I've never worked on a project where everything went as hoped, so good planning is all about dealing with things that could go wrong.

Just like in baseball, the absolute worst time to deal with a crisis is during the crisis. There's just too much pressure when the ball gets hit to you to decide where to throw it. And as you stand there, trying to decide what to do, the problem is changing, time is passing, and things are continuously getting worse. If you try to make the decision in the heat of crisis, you're likely to throw the ball into no man's land. Instead, you need to plan for crisis before the crisis happens.

Even before starting the project, ask yourself, for example, what could you do if the hardware gets delayed? Can you continue to develop the software using a simulator or using a different hardware platform? Are there tasks you could move around to minimize schedule impact? Are there features that can be dropped in order to shorten the back-end schedule? In the end, it may be impossible to deliver the software less than X months after the hardware is available, and even that information is critical in the project planning phase.

Every assumption is a risk, and every risk should have a contingency plan. It's even possible to plan for the unexpected (that will be a blog entry for another day). And when you're in the middle of the project, and a crisis breaks out, you can calmly and confidently tell people that everything is going according to plan. Then just do what your plan said you would do.

Tuesday Jan 09, 2007

Breaking The Fourth Wall

The term "fourth wall" is used in performance art to describe the invisible barrier between the actors and the audience. On occasion, a performance will "break the fourth wall" by having an actor address the audience directly. While there are many examples in classical plays, my own vivid memory of one example is of the Burns & Allen Show on TV, when George Burns would turn directly to the camera and talk to the audience, one-on-one, about what was transpiring on the show. There actually was no audience on the set, just a camera, but Burns broke the fourth wall just by acknowledging that an audience eventually would be watching the show.

As product development engineers, we often hide behind the fourth wall that separates us from our customers, the end-users of our products. We work on products, oblivious to the fact that they eventually will be interacting with our customers. We sometimes design products for ourselves instead of our customers, or ignore annoying little "features" that might drive customers crazy, hoping no one will notice, or design individual products without much consideration of how a customer will assemble them into a system. In effect, we act on a soundstage in front of a camera, oblivious to the audience behind the fourth wall that will eventually view our performance.

Last year I had a rare opportunity to meet with a large customer. I got to break the fourth wall. This customer owned hundreds of our products, from desktop machines up to multi-million dollar high-end servers. Along with a couple of senior engineers from the other product families, we were to present our low-end, mid-range and high-end product roadmaps. While I went into the meeting with excitement and pride, I crawled out of there feeling like a wounded puppy. And I would go back in a second.

In the meeting, the first thing the customer mentioned was connectors. Connectors? We wanted to talk about processor architectures, memory capacity, and IO options. Who cares about connectors? Well, the customer cared. On our current high-end products, the serial console connector is a DIN connector, similar to most keyboard or mouse connectors. They found that if the cable is bumped or tugged, the DIN connector could fall out, requiring physical access to the datacenter to re-insert the cable. And very few individuals were allowed physical access to the datacenter. "Why couldn't you use a captive connector, like an RJ11 or DB9?" they asked. I had no answer.

The next issue they raised was the command line interface for managing our products. Each product family had different commands. And when two product families happened to have the same command, the options and arguments were often different. "We have to train our employees on three different command sets," they explained, showing a "cheat sheet" they had issued to all their system administrators with instructions for the most common tasks on each product line. "Why can't you have one set of commands for all your products?" Again, I had no answer. We had created a great command line interface for our product line, with exactly the features and options our product needed. The engineers working on the other product lines did the same thing for their products.

The honest answer to both of the questions is simple: We failed to break the fourth wall before we shipped the product. The customer knew the answer; they were just trying to make a point and get us to realize it ourselves. Aside from the tongue-lashing, the customer did praise our products: we did a great job engineering our products, but we could do better. And the customer expressed how happy they were that we development engineers were coming to visit them -- they were glad to see the fourth wall being broken and hoped that our encounter would leave a lasting impression. It did.

In development, we need to break the fourth wall -- the invisible barrier that separates the developers from the customers. We need to see how customers use our products, understand their issues and concerns, and internalize their pain. And nothing helps internalize pain more than feeling it first-hand. It's not enough to read articles, or listen to stories second or third hand. I learned a lot from that customer visit; the people who listen to and read my story might only learn a small fraction of it. We need to get all of our developers out in the field, meeting with customers directly, accompanying our Service Engineers on customer calls, shadowing our Tech Support Engineers in our Call Centers, and talking to our Field Engineers (who in many ways are our customers as well). When we break the fourth wall, both our products and our customers will benefit.


Copyright 2007, Robert J. Hueston. All rights reserved.

Monday Jan 08, 2007

YARPUI

How often have I heard a project leader say, "I'm not responsible for the current situation"? It's the moral equivalent of a three-year-old's, "I didn't do it!". To which the official parental response is, "I don't care. Clean it up anyway!"

There's a key difference between being responsible for a problem, and being the cause of the problem. A long time ago I learned an acronym that has followed me to this day:

    YARPUI: You Are Responsible for the Position U are In.

OK, it's a bit of a forced acronym, but it makes the point. No matter how we got into the current situation, no matter who caused it, no one is responsible for helping us except ourselves. We own the responsibility for our own lives, and extending that to the workplace, we project leads are responsible for our projects, regardless of outside forces. I used to have a sign over my desk that just read "YARPUI," to remind myself of that every day.

I'm not talking about always falling on one's sword -- the hollow "I take full responsibility" sort of statements that are usually followed by excuses. Those are acts of contrition, with all the sincerity of a person who says, "I'm sorry" when someone else bumps into them. Usually the person claiming full responsibility is not about to take responsibility; they are often about to resign, which is the antithesis of responsibility.

A person who is responsible takes control of the situation, even when they are not the cause of the disaster. They say, "To hell with how we got here. Let's move forward." No finger pointing. No excuses. No whining. Just action.

It can be difficult to deal with bad situations that are not your doing, that are out of your control. I had a young engineering project leader come to my office once, infuriated because one of the critical engineers on her project was pulled off to work on something else. "This isn't my fault!" she insisted. "No," I told her, "but you are responsible," and I introduced her to YARPUI. She needed to re-examine the plan and come up with options: How would this loss impact the delivery schedule? Who else could potentially join the team to backfill? Could features be dropped to save the schedule? And in the end, there was no value in trying to blame anyone for the situation; a simple statement like, "Due to staffing changes, we are replanning" would suffice.

In history, an excellent example is George Washington, the Hero of the Monongahela. After the disaster of Fort Necessity in 1754, Col. Washington's Virginia regiment was disbanded and he returned to civilian life. A year later, British General Braddock hired Washington as an aide. In 1755, during the Battle of the Monongahela, General Braddock was mortally wounded and the British officers and troops scattered in disarray, easy targets for the French and Indian warriors. Washington took responsibility for the situation. Even though he held no position in the British Army chain of command, he gave orders to the British officers, and rode up and down the lines restoring order and achieving an orderly retreat. Washington wasn't the cause of the situation -- he wasn't even an officer in the British Army -- but he took responsibility and placed his life on the line to extricate himself and his fellow soldiers from the situation.

At this point, when anyone who has worked with me for any length of time is faced with a problem caused by someone or something else, they might stomp into my office, but as soon as I say, "YARPUI," they know that they're not going to find a sympathetic ear. They know they need to get right back to work, take responsibility for the situation, and plan around whatever disaster just happened. At least that's better than listening to another boring story about the Hero of the Monongahela.


Quote of the day: "The secret of success is sincerity. Once you can fake that you've got it made." -- Jean Giraudoux.
Copyright 2007, Robert J. Hueston. All rights reserved.

Thursday Jan 04, 2007

Deciding not to decide is not deciding

One of the most important aspects of being a leader is making decisions. Decision making is the tiller of a project; it sets the course and helps maintain forward motion. Good decision making is the cornerstone of good leadership.

Four Facts About Decisions

Over the last couple of decades I've learned a few things about decision making:

  1. The best person to make a technical decision is the person closest to the problem (typically the developer), since they know the most about the technical options and consequences.
  2. Development engineers sometimes feel that decisions need to be made by higher-ups, or that they are not empowered to make certain decisions. In these cases, the leader's job is easy: Listen to the developer, and when they're done you say, "Yes, you're right, I agree," and let them get back to work.
  3. No timely decision is ever made with perfect knowledge; if you know all the factors when you are making a decision, you're probably making the decision too late.
  4. A sub-optimal decision is better than no decision. Too often projects will languish needlessly, costing time and money, trying to make a decision, when the consequences of not making a decision are far more costly than selecting the less-optimal option.

From the above facts, one can draw the following piece of advice:

    Push decision making down to the people closest to the problem, empower them with the authority to make decisions, encourage them to make decisions, and support them even when their decisions turn out to be sub-optimal.

Good, Bad and Sub-optimal Decisions

You'll notice in the above that I use the term "sub-optimal" to describe a decision, instead of "bad." A sub-optimal decision is a decision which is made based on the available information and appears at the time to be the best decision, but once the future is revealed it turns out not to be the best decision. The only way to make an optimal decision with certainty is to have perfect knowledge in advance, which is practically impossible, and typically means there isn't really a decision to make (e.g., Should I get round wheels on my car, or square ones? I can confidently make an optimal decision because there really isn't much of a decision to be made). Most decisions are based on partial knowledge, and while one can take steps to maximize the probability of making an optimal decision, the optimality of the decision cannot be known until the future, and sometimes it can never be known.

A bad decision is one which is made without sufficient diligence, or is made despite indications that it is not the best choice. There are several traps that a decision maker can fall into that lead to bad decisions. [See "The Hidden Traps in Decision Making," John S. Hammond, et al., Harvard Business Review.] Those traps include:

  • Anchoring: Giving disproportionate weight to the first information you receive. You need to give all options, even late arrivals, the same diligence and consideration.
  • Status Quo: Choosing an option because it maintains the current situation. While change does involve cost, other options may provide a higher pay-back.
  • Sunk Costs: Using the time and money you've already invested in one option to justify continuing with the same faulty option.
  • Confirming Evidence: Looking for evidence that supports a particular option, rather than looking for objective evidence.
  • Unrealistic Forecasts: Overestimating the gains or underestimating the costs of an option.

An Example

The four facts of decisions can be demonstrated with a single example from my recent past. I had a small team of engineers working on a software design, and they were stuck deciding between several approaches. After one week, they needed more time. After two weeks, I knew they needed help.

I set up a meeting, and we discussed the options. We created a pros/cons table, also called a "consequence table," on the whiteboard. I like to use the first column for the criteria used to judge the options, such as ability to meet requirements, code size (cost to implement), code complexity (cost to test and maintain), performance, and so forth. The other columns are for the various options, and the cells show how well the option performs compared to the criteria. [For a more complete description of the process, see "Even Swaps: A Rational Method for Making Trade-Offs" by John S. Hammond, et al., Harvard Business Review, March 1998.]
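
To make the layout concrete, here is a toy consequence table in code (a sketch only; the criteria come from the list above, but the options and scores are invented for illustration):

    # A toy consequence table: rows are criteria, columns are options,
    # and each cell scores an option against a criterion (1 = poor, 5 = good).
    criteria = ["meets requirements", "code size", "code complexity", "performance"]
    options = {
        "A": [5, 4, 3, 4],
        "B": [5, 3, 4, 4],
        "C": [3, 5, 5, 2],
    }

    # Print the table, then a simple unweighted total per option.
    print(f"{'criterion':<20}" + "".join(f"{name:>4}" for name in options))
    for i, criterion in enumerate(criteria):
        print(f"{criterion:<20}" + "".join(f"{scores[i]:>4}" for scores in options.values()))
    totals = {name: sum(scores) for name, scores in options.items()}
    print("totals:", totals)   # A and B tie; C is clearly sub-optimal and can be dropped

In practice the criteria get weighted and argued over rather than simply summed; the value of the table is in making the trade-offs visible, not in the arithmetic.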

We filled out the consequence table and eliminated all but two options, A and B, which were very closely balanced. Nothing we could do made either option stand out as a winner. At the end of the session I looked at the whiteboard and picked option A. I could make that decision easily because:

  • I trusted my engineers -- they were thorough and diligent in their analysis, had fully researched the options, and they knew the consequences of each option.
  • The two options were so close that my two engineers could not differentiate which option was better. If they had to choose, they would have flipped a coin, so I flipped a coin in my head and picked one.
  • Since both options were equally good, it didn't matter which option I picked. Neither choice would be a "bad" decision; at worst, I might pick the option that was a little more work and therefore a little less optimal.

With a decision made, the team got to work on the detailed design. After a couple of weeks, one engineer came back with a problem: Option A turned out to be far more complex than they had initially thought -- we didn't have perfect knowledge during the consequence table analysis. We had a new decision to make: Stay with option A or change to option B. We did the analysis, and the consequence of staying with option A, even with the effort already expended on option A, would still be more costly than switching to option B. So we decided to change the design. In just a few days, the design had been reworked to use option B.

Note that the time it took to change from option A to option B was small. It had cost us weeks of time trying to decide on an approach, and in the end it only took a few days to switch from one option to the other when we had more knowledge. In addition, I still maintain that the original decision was a "good" decision; it was made analytically, with the knowledge available at the time. I also believe the second decision -- to change options -- was also a good decision since it too was based on the available knowledge at the time.

Rules To Decide By

When making decisions, we should follow a simple set of rules:

  • Analyze: Don't make decisions blindly; do "due diligence" to gather pertinent data before making a decision. Use a "consequence table" or some other analytical method to eliminate options which are clearly sub-optimal. This is sometimes called "data driven decision making", which implies that having data is sufficient to make a decision; I prefer "analytical decision making".
  • Intuit: Don't ignore your intuition, especially if you're a subject matter expert. Often, your brain is subconsciously doing an analysis and coming up with the best option faster than you can do it consciously.
  • Act: When you have sufficient information, make a decision, make it quickly, make it stick, and move forward. Don't revisit a decision, unless there is new, significant information. Often a person who doesn't agree with the decision will try to revisit the decision to get you to change your mind. But changing a decision without new information is effectively not making a decision, and worse, it causes people to lose confidence in all of your future decisions.
  • Adjust: Don't become too attached to your decisions. If facts come to light that a previous choice should be overruled, be open to making a new decision.

Wednesday Dec 27, 2006

Top-Down or Bottom-Up

Strategy is top-down planning. Tactics are bottom-up planning. Both are useful and necessary planning techniques, and an iterative process is often needed to align strategic plans with the real tactics available.

Many times I'm approached by a frustrated engineer who was just handed a vague project description, a hypothetical list of engineers, and a highly aggressive delivery date, and asked to produce a detailed schedule and project plan. They're usually bewildered, and sometimes furious. They don't understand how a senior manager can come up with a delivery date before any detailed planning has started, and before the requirements are well defined. What they don't realize is that the high-level plan, with vague requirements, rough cost and target delivery date, is an item in a strategic plan. They've been handed a top-down plan -- a natural product of the flow-down of corporate goals and business unit objectives -- and they've been asked for the bottom-up plan. While the top-down plan lacks details and specifics, it provides a framework for creating the bottom-up detailed plan; the bottom-up plan is then used to adjust the overall product strategy, and may result in changes to that strategy.

In one particular case, an engineer had been given vague requirements and sent off to come up with a tactical project plan. She worked with me to flesh out the requirements, tasks, staffing, and schedule. In the process, we discovered that the project was far more complicated than it originally appeared -- there were dependencies on other business units, dependencies on new hardware features, and in order to avoid any chance of silent data corruption the software needed to be very complex. When the cost and schedule turned out to be far beyond what the company was willing to pay, we looked at features to drop and ways to bring in the schedule. In the end, the best proposal would simply not meet the strategic plan. She presented her results to upper management, and told them they had to change their strategic plan; they either had to allocate more money, allow more time, or drop the feature from their strategic plan. They chose to drop the feature. The engineer felt like a failure, but in fact she was successful in her task -- she demonstrated that the strategic plan was not achievable, and with the detailed analysis she had done, she convinced management to change their strategic plans. That's far more valuable than just saying "yes sir" and then a year later failing to deliver the feature.

In another case, I had a senior engineer come to me because he was not given a top-down plan. He was told that a new technology was going to be available from third-party vendors, and he should come up with a plan to deliver the software support. He had to come up with both the strategic plan and the tactical plan, something he had never done before, and he had no idea where to start. He could do all the work himself, but it would take five years; or he could use a large staff and get all the features done quickly. But without a top-down plan, without a strategy, he had no frame of reference.

So we sat down and started to work out the strategic top-down plan:

  • Schedule
    When would we want to deliver the product? It turns out the hardware that supports the new technology would be shipping in 18 months, but it was unlikely to be widely used until 24 months from today. So the strategic delivery date would be 18 months out, with a fallback plan to deliver prototype software at 18 months and a solid product no later than 24 months.
  • Requirements
    The things you could do with this new technology were vast. So we decided to triage the requirements into must-haves, nice-to-haves, and non-requirements. Since this new hardware would be replacing existing hardware, the "must have" requirements were to allow customers to transition smoothly from current hardware to new hardware, without any loss of functionality, but with significantly improved performance. The ability to take advantage of new features in the hardware was a "nice to have". And a small subset of features weren't even going to be supported in the hardware next year, so those became non-requirements.
  • Cost
    The main cost was going to be staff, and the number of engineers could be small or large. So I asked this engineer what would be "optimal." There were three key areas that needed development, and he could probably do any one of them in 18 months, although it was unlikely that the project would get three senior engineers. But one of the pieces was big and could be split in two and handled by two junior engineers. So the answer was three or four development engineers for 18 months.

We now had a top-down strategic plan: Three or four engineers would deliver the full set of legacy features plus some new features, and would deliver the product in 18 to 24 months to align with hardware availability. This engineer was actually surprised how our analysis had produced a top-down plan that sounded a lot like the sort of plan his manager usually gave him (which he had always assumed was just made up without any thought at all).

With the top-down plan, the engineer was able to work on the bottom-up plan, defining tasks, assigning tasks to his fictitious engineers, and seeing which requirements could be accomplished in 18 to 24 months. At first, he was unable to get the tactical plan to exactly match the strategic plan, but with the flexibility in the strategic plan, he was able to adjust both plans until they matched.

This isn't too different from the way we approach projects in our everyday life. Last year I had some work done on my house. I started with a vision, a strategic plan:

  • I had a home improvement loan (cash, burning a hole in my pocket).
  • I had to replace the siding and do some repairs (must-have requirements), but I also wanted to add a bay window and a back porch (nice-to-have requirements).
  • I wanted the work finished before the start of Summer (I didn't want construction waste around the house while children were playing in the yard).

Then I set out to define my tactics. I could do the work myself. That would meet my budget, but I wasn't sure I could actually manage lifting a bay window, and I knew I could never finish before Summer (of this year, at least). So my wife quickly eliminated that tactic and had me contact a bunch of contractors, all with good reputations and long lists of references. I sat down with each one and explained my strategic vision: what I wanted (needs plus desires), when I wanted it (I shaved off a few weeks), and about how much I could afford to spend (I gave a low-balled figure, of course). Then each contractor provided a proposal:

  • The first contractor could meet price and features, but was booked until the end of September.
  • The second contractor proposed vinyl siding over the existing siding, and an aluminum window (which wouldn't match the existing wood windows). The cost was low and the schedule was fantastic, but it didn't really meet my quality requirements.
  • The third contractor was a meticulous craftsman. He presented detailed materials lists, estimates, and sketches of what the house would look like when he was done, but his cost estimate was far more than I could afford.
  • The fourth contractor was a little high on price, but he was able to get the work done by the end of June, and would use high quality cedar siding and top-of-the-line windows.
Each proposal was valid, and was a sincere attempt to meet my requirements using different tactics; I decided the fourth contractor had the tactics that would best achieve my strategy. I felt bad that I had all of these contractors waste time doing estimates and proposals, but I needed the options, especially since none of the proposals met all my constraints. And I had to find a proposal I could live with.

In a large company, the engineering project lead often plays the role of the contractor. Or more specifically, the project lead represents all of the contractors. The engineer will get a strategic plan which is unachievable in totality, so they will come up with their best guess at a detailed plan. Management will tell them the schedule is too far out, come up with another plan. The second plan will meet schedule but cost too much. The third plan will drop too many features. Finally, the project lead will come up with a plan that doesn't meet the original requirements, schedule or costs, and management can either accept it, or change their strategy. It can be a frustrating process for the engineer (just as I'm sure it's frustrating for a contractor to put together a proposal and still be passed over); and it can be an equally frustrating process for management (and the homeowner). But it is the process most people use to synergize their unachievable strategic plans with the cold hard facts of the tactical situation.


[If you're interested in a good home improvement contractor in the Boston area, let me know. I have a few references.]
Copyright 2006, Robert J. Hueston. All rights reserved.

Tuesday Dec 26, 2006

GOST In The Machine

The terms "goal," "objective," "strategy" and "tactic" are often confused and misused. I wanted to write a couple of short entries on the GOST terms and clarify their use. As they say, an ounce of example is worth a pound of advice, so I'll be using examples primarily from the War Between the States.

Goal

A "goal" is an endpoint, a future. In war, it might seem obvious that the goal is to win the war. But a goal of "winning" is meaningless, and itself can lead to defeat (I believe many wars have been lost because people were too busy trying to "win" and lost sight of the goal of winning). One must define what winning means in order to define a goal. In the Civil War, for example, the goal of the Union was very different from the goal of the Confederates. The Union's goal was to quash rebellion in the South and force the Southern states to rejoin the union. The Confederates' goal was to defend their secession, to repulse Northern aggression. Both camps had a goal, and marshalled their forces to achieve the goal. The military on both sides could use the stated goal to organize their objectives, strategies and tactics around achieving the goal.

An organization's "goal" is often communicated through a mission statement. Some people feel mission statements are pointless -- propaganda, cheerleading. Often they are right, but done correctly, they can also be a powerful tool.

One organization's mission statement was "Deliver wow products that elate customers." Sounds catchy, but it's about as useful as having a military goal of "win the war." It is vague and unmeasurable. The mission statement cannot be used to judge the objectives of the organization; it cannot be used as a decision making tool; and the organization cannot know when it has achieved the goal. Could you imagine product managers rating their products, canceling all "neat" and "cool" products, and spending their effort only on the "wow" ones? Or canceling a project because it will only "delight" customers but won't "elate" them?

In contrast, one small aerospace company had a boring-sounding mission statement which I believe to be one of the best I've heard. It went something like: Be the number one or two supplier of aircraft engine sensors and engine status indication systems. It told the company:

  • They were not going to develop other sensors.
  • They were not going to develop control systems.
  • They were not going to develop other indicators or cockpit displays.
  • They weren't going to enter the market of automotive engines, or land turbines.
  • They were going to develop, manufacture and sell sensors and indicators for airplane engines.
  • And they weren't done until they were the number one or number two supplier in that market.

Not very catchy, but it did create a frame of reference for directors to review their product development plans and portfolios and make serious decisions. It also provided measurable criteria to know when the goal was achieved.

Objective

An objective is a means by which you attain a goal; it's a step that must be completed in order to achieve the goal. Using the Civil War analogy, the North had a number of objectives in order to achieve their goal. One objective was called the Anaconda Plan: Close the Southern ports to prevent the sale and export of agricultural products to Europe, and the importation of industrial products and weapons. Of course, England was a key trader with the South, and a second objective of the North was to avoid going to war with England -- war with England would not help achieve the goal of reunification. So the original Northern objective could probably be better stated as: Block Southern ports without risking a war with European powers. With that, an admiral could understand the objective and devise strategies to achieve it. Several strategies that support this objective include:

  • Formally declaring a "blockade" (the 1861 "Proclamation of Blockade Against Southern Ports"). A formal blockade is recognized by international law, and gave the Union Navy certain powers to enforce the blockade with neutral ships.
  • Enforcing the blockade of seaports with a number of new, fast Union Navy ships.
  • Controlling the Mississippi River with new river gunboats and the support of Army troops

and so forth. The objective also ruled out the strategy of invading Cuba, a primary port in Southern trade routes -- doing so would have helped blockade the South, but would have led to conflicts with Europe, and could not have been defended by the Proclamation of Blockade. Knowing the objective, one can come up with a set of strategies, and judge if the strategies actually support the objective.

Businesses also have objectives to meet their goals. Selling into a specific market or market segment might be an objective. And if you have a well-defined goal, it should be clear which markets to attack. Without a goal (or with a poorly conceived goal), it becomes much more difficult to decide where to apply your resources. Some effort may be spent on fruitless products; other opportunities may be missed. In some cases, a poor goal at the corporate level can lead to conflicting objectives within business units -- one business unit decides to sell a product that competes with another business unit, cannibalizing margins.

A clear and measurable goal, supported by a set of clear and measurable objectives, makes it possible to create a set of strategies and tactics to be successful. Without goals and objectives, an organization is just groping in the dark for a purpose.

Strategy and Tactics

Strategy is the top-down planning needed to achieve an objective. Tactics are the bottom-up planning to achieve a strategy. More on that in my next entry.


Copyright 2006, Robert J. Hueston. All rights reserved.

Friday Dec 22, 2006

Plagues, Peoples and Process Change

Evolution involves mutation combined with natural selection, which decides whether a change is successful and should be continued, or unsuccessful and should die off. In the book "Plagues and Peoples," William H. McNeill makes the case that societies of humans can evolve like organisms, but at a faster rate. Societies can change (mutate), and if the change is successful, it gets integrated into the society. For example, with a climate change that brings cooler weather, a mammal would require many generations to evolve more hair and a thicker layer of fat as methods to defend against the cold. But as early human communities moved northward from Africa into Europe, they quickly evolved methods to keep warm -- their society mutated, and when the change was beneficial, it was kept and the society prospered. As a result, societies in northern areas that built shelters, wore animal hides, and started eating different foods thrived and dominated societies that could not evolve fast enough.

Other societal evolutions are more subtle. Most people in the West have an apparently innate fear of rats, but little or no fear of other rodents such as squirrels. The fear of rats is probably rooted in the fourteenth century Bubonic Plague pandemic, with the belief that black rats carried the disease. It would have taken centuries to develop an immunity to Plague; however, Western society evolved an abject fear of rats, which caused people to avoid rats more so than other rodents, and may have helped reduce the spread of the disease. Today in the US, the risk of dying from Plague is extremely low, but like the human appendix, the vestigial fear of rats continues long after its applicability has ceased.

An organization or project team is a human society, and evolves like an organism. And just as every organism reacts differently to external stimuli, so does every organization react differently to change.

Process change requires a sort of evolution within a team. If the change is handled poorly, it will be rejected by the team and fail, and the team may evolve a vestigial fear of the process, or of process change in general. One difference between natural evolution and organizational evolution is that natural evolution involves random mutations; organizations can think and can therefore make intentional changes.

There are many methodologies for implementing change. I actually think the process is quite obvious once you treat the team with respect. I've boiled the process down into the following simple steps: think, ask, envision, implement and observe:

  • Think: Decide if and why a change is needed. All changes involve cost (non-recurring costs to define and implement the new process, plus recurring costs in terms of overhead to adhere to the process). One must ensure that the new process will yield improvements in efficiency that clearly outweigh the cost of the change; there must be a big payoff. For the managers in the audience, compare the ROI of the process change against other ways you could be spending your money (a small worked example follows this list). For the rest of us, look before you leap.
  • Ask: Involve others to decide what to change. People who own the process and live with it every day are in the best position to identify areas that need to be changed and those that shouldn't be touched. If people have input to the change, and they see that their input is respected and acted upon, they will be more willing to accept change. And it is just as important to get input from people who resist the change as from people who support the change.
  • Envision: People can fear change, so when the change is finalized, it is important to show them how things are going to change, and how they will benefit from the change (and if you can't show people how they'll benefit from a change, then perhaps the change isn't a very good idea). Remember to include people using the process -- the process "owners" -- as well as people outside of the team that might be affected by the change -- the process "suppliers" and "customers".
  • Implement: Implement the change. Use all your project planning and execution skills to ensure a smooth roll-out.
  • Observe: Watch what happens, monitor the effects of the change and measure the impact.
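
To make the "big payoff" comparison concrete, here is a small worked example in Python. The helper function and all of the numbers (setup cost, recurring overhead, expected savings) are illustrative assumptions, not figures from any real project; the point is only that one-time and recurring costs must both be weighed against the expected savings.

    # Hypothetical numbers illustrating the "big payoff" test for a process change.
    def process_change_roi(setup_cost, recurring_cost_per_month,
                           savings_per_month, horizon_months):
        """Return (net benefit, ROI) of a process change over a time horizon."""
        total_cost = setup_cost + recurring_cost_per_month * horizon_months
        total_savings = savings_per_month * horizon_months
        net = total_savings - total_cost
        return net, net / total_cost

    # Example: a new review checklist takes 80 hours to roll out, 10 hours a month
    # to maintain, and is expected to save 25 hours of rework a month.
    net_hours, roi = process_change_roi(80, 10, 25, horizon_months=12)
    print(f"Net benefit over a year: {net_hours} hours (ROI {roi:.0%})")

If the net benefit is small or negative over any reasonable horizon, the change probably is not worth the disruption.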

The above steps are a cycle -- after observing a change, one may need to go back and think some more. And it is always acceptable to move backward through the process. For example, one may think there is a way to improve a process, but after asking the process owners, it is clear that a change is not appropriate and you need to go back and think some more.

And always keep in mind that process change is evolutionary. If the change is successful, it should be continued; if not, it should be allowed to die off and be replaced by a different process.

Quote of the day: "Rules are intended to provide a thinking man with a frame of reference." -- Carl von Clausewitz


Copyright 2006, Robert J. Hueston. All rights reserved

Thursday Dec 21, 2006

Process Is The Hobgoblin Of Little Minds

Everyone follows a process in everything they do. But many people follow processes without ever realizing it. I once worked with a very talented analog design engineer, Hal, who could look at a problem and immediately tell you the best solution. He couldn't tell you how he came up with the solution, but he always had the right answer. I suspect Hal followed a very rigorous process, in his head, instinctively, subliminally, and very quickly.

If we could clearly identify the steps that Hal followed informally, we could document (or formalize) a process for everyone to follow. It might look something like: write down all the requirements, brainstorm and list all possible solutions, then rate and rank each candidate on how well it meets the requirements and select the candidate with the highest rank. With a formalized process, everyone could make decisions as well as Hal; well, almost as well, since the transformation from instinctive process to formal process to execution is never 100% efficient. But we could raise everyone's proficiency. Everyone's, of course, except Hal's.
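
As a rough illustration, the formalized version of Hal's process might be sketched as a simple weighted decision matrix. The requirements, weights, candidates, and scores below are all made up for the sake of the example; only the structure (list requirements, list candidates, score, rank, select) comes from the description above.

    # A minimal weighted decision matrix: score each candidate solution against
    # each requirement and select the candidate with the highest total.
    requirements = {            # requirement -> weight (relative importance)
        "low noise": 3,
        "low cost": 2,
        "small footprint": 1,
    }

    candidates = {              # candidate -> score per requirement (0-5)
        "op-amp filter":   {"low noise": 4, "low cost": 3, "small footprint": 4},
        "discrete filter": {"low noise": 5, "low cost": 2, "small footprint": 2},
        "digital filter":  {"low noise": 3, "low cost": 5, "small footprint": 4},
    }

    def weighted_score(scores):
        return sum(weight * scores[req] for req, weight in requirements.items())

    for name, scores in candidates.items():
        print(f"{name}: {weighted_score(scores)}")

    best = max(candidates, key=lambda name: weighted_score(candidates[name]))
    print(f"Selected: {best}")

Of course, Hal runs something like this in his head in seconds; the value of writing it down is that everyone else can follow it too.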

If we forced Hal to follow this process, he would waste time writing down all the requirements, identifying subject-matter experts, holding brainstorming sessions, writing all the options on pastel-colored 3x5 Post-it notes and filling the walls of some conference room, drawing four- and five-dimensional diagrams relating all of the requirements to all of the options to all of the costs, and selecting the optimal solution based on a secret ballot of all stakeholders. What used to take Hal a few seconds or minutes of cogitating would now take him days just to get a conference room reservation. Hal's productivity would drop by orders of magnitude.

The purpose of formal processes should be to compensate for deficiencies without hindering proficiencies too much. When applying a new process (or changing an existing process), there needs to be a net gain, a reason for the change, a "big payoff." A "good" process may not be good for everyone. And every process needs to be flexible so that it can be adapted and tailored to the organization, or to the individual.

Formal processes and process change are not a panacea for all problems. To paraphrase an oft-misquoted line from Ralph Waldo Emerson:

    A pointless process is the hobgoblin of little minds.

[Apologies to Mr. Emerson.]

The first step in improving processes is to understand the processes that are in place, formal or informal. If there are no formal processes, the first, critical, step is to formalize the existing processes, exactly as they exist, without embellishment. The last thing you want to do is formalize a process and change the process at the same time.

For example, I once worked at a small company with a team that performed very well with no formal processes. None were needed. As the team grew, new members needed to understand how things got done. So we set out to document the software change control process. The first attempt was a bit over-zealous, and resulted in a dozen-step process, with reviews, checklists, approvals, etc., which simply did not exist (and did not need to exist). That version was torn up and replaced with what was really happening:

  • Author makes code changes.
  • Author decides if other people should review the code. Based on feedback, author decides whether changes are necessary.
  • Author tests code as applicable.
  • When author is satisfied that the code is sufficiently reviewed and tested, author commits changes.
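
If one wanted to make those expectations even more explicit, the process could be captured as a short pre-commit checklist. The script below is purely a hypothetical illustration -- it was not part of the team's actual tooling -- and simply walks the author through the same questions the list above implies.

    # Hypothetical pre-commit checklist mirroring the lightweight process above.
    import sys

    CHECKLIST = [
        "Has the code been reviewed (or is review unnecessary for this change)?",
        "Has the code been tested as applicable?",
        "Are you satisfied the change is ready to commit?",
    ]

    def confirm(question):
        return input(f"{question} [y/n] ").strip().lower().startswith("y")

    if all(confirm(q) for q in CHECKLIST):
        print("OK -- go ahead and commit.")
    else:
        print("Hold off on committing until the remaining items are addressed.")
        sys.exit(1)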

The process made clear what was expected of the developer, without creating artificial hurdles or pointless documentation. This process worked fine, and continued to work well for a long time, because the team members were very conscientious and could be trusted to do the "right thing". If you trust your team members, then your processes do not need to be onerous; they do not need to include checkpoints and approvals (and if you don't trust your team members, perhaps you need new team members).

Much later, when the team had grown and the rate of changes (and the rate of mistakes) increased, we did go back and change the process, evolve the process, in ways that improved overall efficiency at a modest cost. But we could not improve the process until first we had formalized the process.


Copyright 2006, Robert J. Hueston. All rights reserved

Tuesday Dec 19, 2006

Methodologies, Processes and the Silver Bullet

According to most dictionaries, the definitions of "methodology" and "process" are virtually identical; however, some people, engineers in particular, ascribe very different meanings to the two words.

Processes are imposed from above; methodologies are adopted from below.

Processes are obeyed; methodologies are adhered to, sometimes religiously.

Processes are onerous; methodologies are liberating.

Processes are often circumvented; methodologies are staunchly defended.

But I think the basic difference is:

Processes are someone else's methodologies.

If the government had only devised a methodology instead of a process for paying taxes, people would be wearing tee-shirts extolling the virtues of the "IRS Methodology".

The Silver Bullet

A couple of years ago I was fortunate to attend a presentation by Sarah Sheard, Chief Technologist of the Systems Engineering, Systems and Software Consortium. The presentation, entitled "Systems Engineering and Silver Bullets," told a "fable" that went something like this:

    The CEO of a company is unhappy with all of the existing methodologies and decides to come up with a methodology that uniquely works for his company. He involves his managers and employees in coming up with the new methodology. Productivity and morale skyrocket.

    Several other companies are impressed by the increased productivity. They dispatch senior managers to study the new methodology and carefully adapt it to their own companies. These companies see positive improvement, thus confirming the value of the new methodology.

    Many more companies decide to adopt the new method. They assign middle managers to implement the new methodology, as best they can, in a short period of time, knowing only what they've read in articles. Managers, in turn, force the changes onto their employees without accepting feedback. Improvements are marginal, and morale sinks.

    The lack of gains and the decrease in morale are cited as evidence that the new methodology does not work. Books are written debunking the new methodology.

    The executive of another company is unhappy with the methodology and decides to come up with a methodology that uniquely works for his company...

[A more complete version of the fable can be found at http://www.stsc.hill.af.mil/Crosstalk/2003/07/sheard.html.]

The moral of the story is obvious -- there is no silver bullet, no single methodology or process that will improve efficiency at every organization. A methodology must be embraced from the top of the organization to the bottom, and care must be taken to tailor any methodology to the organization, based on measurable performance improvements and individual feedback. Only then will a methodology really be successful. On the other hand, if you buy a book on a methodology and force people to follow it, you will probably fail.

I have seen this many times -- what works for one organization does not work for another; this can even be true for two workgroups in the same company. Methodologies must not be rigid; they must allow for tailoring to an organization, and to workgroups within an organization.


Copyright 2006, Robert J. Hueston. All rights reserved

Monday Dec 18, 2006

The Golden Axiom of Learning Leadership

Since I started leading projects back in the 80's, I've learned one golden axiom that has proven true in all cases. In the past I have generally only shared this with my closest peers:
  • You only learn to succeed from your mistakes, and others' successes.

The first part, learning from your mistakes, is well known.

The second part of the axiom, learning from others' successes, is less well appreciated, but I find it to be just as true. People learn from watching and mimicking behavior. If you watch a person performing well, succeeding, handling problems calmly and analytically, then you will learn those behaviors yourself. If that weren't true, then we would have no hope of learning except through mistakes, and I'm pretty sure that would have meant the end of the human race a long time ago (and Harvard wouldn't be able to charge $35K per year tuition). Clearly we must be capable of learning from good examples.

In addition to the golden axiom, there are a couple of subtleties that most people overlook. I call them corollaries:

  • Corollary #1: You don't learn to succeed from your own successes.
  • Corollary #2: You don't learn to succeed from others' failures.

Corollary #2 goes hand-in-hand with the second half of the golden axiom. Humans learn from watching others. We can learn to succeed from successful people, and we can learn to fail from failures. But we don't learn to be smart by watching fools.

Studying a child can be a good way to learn about adult behavior. Children do things naturally, in an uninhibited fashion; adults often try to obfuscate their actions and motives, but they really behave in much the same way as children.

To demonstrate corollary #2, consider Caillou. Caillou is a cartoon on PBS, about a four-year-old boy of the same name. My 3 1/2 year old daughter loves the show. When Caillou is good, she mimics his behavior and she is good; when Caillou is bad, she still mimics his behavior. As an example, my daughter likes baths. But one day Caillou didn't want to take a bath. He cried and whined, but once he got in the tub, he loved it. Clearly the message was supposed to be, "Don't cry at bath time because baths are a lot of fun." But for a week after watching that show, my daughter would cry and whine at bath time. She had observed the poor behavior of Caillou on TV, saw that it was wrong, but still she modeled her own behavior on what she observed in others.

I've seen this same sort of thing in adults as well, sometimes even in myself. When I was a new college grad I had one manager who would pass out a list of tasks that each engineer should be working on that day. My peers and I just hated the micromanaging -- we used to work on low-priority tasks first just to tick off our manager, and we set up a dart board in the lab where we posted the daily task lists and took aim. A few years later, I was the leader of a fairly large project and was having trouble keeping all the work straight, when I found myself writing up daily task lists for my team members, and was met with a small mutiny. When I was faced with a challenging situation, I mimicked a behavior I was familiar with, despite knowing how poorly the approach worked. But that was the "training" I was given, the only "tool" I had. I only learned the lesson once I had made the mistake myself.

[Corollary #2 poses a unique problem when writing about leadership. An author, in fact any mentor, and even a parent, wants to share their mistakes, desperately hoping others will avoid making the same mistake. But it's a futile goal, and in fact, it can backfire quite badly as the observer may be drawn to make the same mistakes. Ever tell a child, "Don't touch that!"? Odds are, their little hand will immediately start to reach out for it. I suspect at this very moment there are a few managers out there who read this blog and are thinking, "Hmm. Daily prioritized task lists? Maybe I should try that with my team." And probably at least one engineer is going to whine tomorrow morning when he has to take a shower. In the future I will try to avoid dwelling on mistakes, and concentrate on successful behaviors.]

Knowing the golden axiom and its corollaries, we can develop a set of personal principles we can take with us:

  • Don't kick yourself when you make a mistake; you're now smarter than you were yesterday.
  • Don't brag when you succeed; you got lucky and you're just going to screw up again sometime soon. Success should be more humbling than failure.
  • Watch those around you who are successful; their success can rub off on you.
  • Ignore failures; their failures can rub off on you, too.
  • Don't gloat at others' failures; they just got smarter, and you just might have gotten dumber.
  • And leaders, mentors, and parents should always teach how to be successful, but let people make the same mistakes that you and everyone else have already made. Those mistakes are part of what make you successful.

There are also a couple of rules that organizations need to internalize:

  • If you employ poor or even mediocre leaders, not only will your projects suffer today, but your junior engineers will learn poor leadership habits. Poor leadership is infectious and can destroy the health of any organization.
  • On the other hand, if you employ excellent leaders, it will help junior engineers grow up to be good leaders themselves.

Copyright 2006, Robert J. Hueston. All rights reserved