Saturday, August 2, 2008

The Experience Trap - HBS INSEAD

As projects get more complicated, managers stop learning from their experience. It is important to understand how that happens and how to change it.

by Kishore Sengupta, Tarek K. Abdel-Hamid, and Luk N. Van Wassenhove

If you were looking for an experienced manager to head up a software development team, Alex would be at the top of your short list. A senior manager, Alex has spent most of his career running software projects. His first responsibility was developing scientific software for NASA, and since then, he has overseen ever more complex projects for commercial enterprises and government agencies.

Alex was typical of the several hundred project managers who participated in our research initiative on experience-based learning in complex environments. We invited him to test his skills by playing a computer-based game that entails managing a simulated software project from start to finish—making the plans, monitoring and guiding progress, and observing the consequences. We set goals for him: finish on time and within budget, and obtain the highest possible quality (measured by the number of defects remaining; fewer defects meant higher quality).

Alex’s decisions and outcomes were representative of the group as a whole. He started with a small team of four engineers and focused mostly on development work. That tactic paid off in the short run. The team’s productivity was high and development progressed quickly. However, when the size of the project grew beyond initial estimates, problems cropped up. Because Alex still chose to keep the team small, the engineers had to work harder to stay on track. Consequently, they made many mistakes and experienced burnout and attrition. Alex then tried to hire more people, but this took time, as did assimilating the new hires. The project soon fell behind schedule, and at that point Alex’s lack of attention to quality assurance in the early phases started to show up in snowballing numbers of software errors. Fixing them required more time and attention. When the project was finally completed, it was late, over budget, and riddled with defects.
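The feedback loops that undid Alex's project can be captured in a toy model. The sketch below is ours, not the study's actual simulator; every number, name, and parameter is illustrative. It shows the same qualitative dynamics: a small team carries a growing backlog, overload drives up the error rate, and hiring relief arrives only after a recruiting delay.

```python
# Toy model of the project dynamics described above: a small fixed team,
# mid-project scope growth, overload-driven errors, and slow hiring.
# All parameters are illustrative; this is not the study's simulator.

def run_project(initial_team, hire_week=None, hires=0, hiring_delay=4):
    tasks = 400                     # initial scope estimate (task units)
    team = initial_team
    week = 0
    defects = 0.0
    hires_arrive = None

    while tasks > 0 and week < 200:
        week += 1
        if week == 20:
            tasks += 200            # scope grows beyond the initial estimate
        if hire_week is not None and week == hire_week:
            hires_arrive = week + hiring_delay   # recruiting takes time
        if hires_arrive == week:
            team += hires           # assimilation costs are omitted here

        # Each engineer nominally finishes 2 tasks/week; backlog per person
        # (overload) raises the error rate, and errors become defects.
        load = tasks / team
        error_rate = min(0.5, 0.002 * load)
        done = 2.0 * team
        defects += done * error_rate
        tasks -= done * (1 - error_rate)

    return week, defects

print("small team:", run_project(initial_team=4))
print("hired mid-project:", run_project(initial_team=4, hire_week=20, hires=4))
```

Even in this crude sketch, the variant that hires when the scope grows finishes sooner than the one that keeps the team small throughout, because sustained overload keeps the error rate high week after week.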

After the game, we asked Alex to reflect on the simulation. Did the project’s growth take him by surprise? Was he shocked that the number of defects was so high or that hiring became difficult to manage? Alex—like most of his fellow participants—replied that such surprises and shocks have, unfortunately, become regular occurrences in most of the projects in which he’s been involved.

Quality and personnel headaches are not what most companies expect when they put seasoned veterans like Alex in charge of important projects. At this stage of their careers, they should know how to efficiently address problems—if not prevent them altogether. What we discovered in our experiments, however, was that managers with experience did not produce high-caliber outcomes. In our research, we used the simulation game to examine the decision processes of managers in a variety of contexts. Our results strongly suggest that there was something wrong with the way Alex and the other project managers learned from their experiences during the game. They did not appear to take into account the consequences of their previous decisions as they made new decisions, and they didn’t change their approach when their actions produced poor results.

Our debriefings indicated that the challenges presented in the game were familiar to the participants. We asked them to rate the extent to which the game replicated their experiences on real-life projects on a scale of 1 to 5, where 5 meant “completely.” The average score was 4.32, suggesting that our experiments did accurately reflect the realities of software projects. So, though the managers had encountered similar situations on their jobs in the past, they still struggled with them in the simulations. We came to the conclusion that they had not really learned from their real-life project work, either.

In the following pages we’ll identify three likely causes for this apparent breakdown in learning, and we’ll propose a number of steps that organizations can take to enable learning to kick in again.

Why Learning Breaks Down

When anyone makes a decision, he or she draws on a preexisting stock of knowledge called a mental model. It consists largely of assumptions about cause-and-effect relationships in the environment. As people observe what happens as a result of their decisions, they learn new facts and make new discoveries about environmental relationships. Discoveries that people feel can be generalized to other situations are fed back, or “appropriated,” into their mental models. On the face of it, the process seems quite scientific—people form a hypothesis about a relationship between a cause and an effect, act accordingly, and then interpret the results from their actions to confirm or revise the hypothesis.

The problem is that the approach seems to be effective only in relatively simple environments, where cause-and-effect relationships are straightforward and easily discovered. In more complex environments, such as software projects, the learning cycle frequently breaks down. In the experiments we carried out with our study participants, we identified three types of real-world complications that were associated with the cycle’s breakdown.
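One reason the hypothesis-test loop fails in complex environments is that effects arrive long after their causes. The toy example below, entirely our own construction, makes this concrete: a naive learner estimates the effect of this period's effort on this period's output. When feedback is immediate, the inferred effect matches the true one; when the true effect arrives a few periods late, the same inference procedure concludes that effort barely matters.

```python
# Illustrative sketch (ours, not the authors'): how delayed feedback
# derails naive cause-and-effect learning. Output truly depends on the
# effort chosen `delay` periods earlier, but the learner regresses output
# on the CURRENT period's effort.

import random

def perceived_effect(delay, periods=2000, seed=1):
    rng = random.Random(seed)
    true_effect = 2.0
    efforts, outputs = [], []
    for t in range(periods):
        effort = rng.uniform(0.5, 1.5)
        efforts.append(effort)
        cause = efforts[t - delay] if t >= delay else 1.0
        outputs.append(true_effect * cause + rng.gauss(0, 0.1))

    # Naive inference: least-squares slope of output on current effort.
    me = sum(efforts) / periods
    mo = sum(outputs) / periods
    cov = sum((e - me) * (o - mo) for e, o in zip(efforts, outputs))
    var = sum((e - me) ** 2 for e in efforts)
    return cov / var

print(round(perceived_effect(delay=0), 2))  # close to the true effect, 2.0
print(round(perceived_effect(delay=3), 2))  # near 0: the delay hides the link
```

The learner's procedure is unchanged between the two runs; only the environment's delay differs. That is the pattern the experiments point to: the same managers who learn well from immediate feedback misread environments where consequences surface weeks or months after the decisions that caused them.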
