Gospels of Failure

The reports on three high-profile disasters offer rich lessons in why organizations fail — and how not to.

Here’s a riddle. What is the only business book ever to spend more than 19 weeks on the New York Times best-seller list, sell more than a million copies, and be nominated for the prestigious National Book Award?


Give up? It’s The 9/11 Commission Report, which is shaping up to become the surprise hit of the last year. It’s a trick question, granted: The 9/11 study isn’t a traditional business book; at least, it’s not the overhyped, how-to, warmed-over fluff that all too often dominates the genre. But the commission’s report is a careful analysis of flawed organizations, and of the devastating effects of siloed cultures and ineffective management.

So it’s finding fans in pockets across the business world. Felix Barber, a Zurich-based senior adviser to the Boston Consulting Group, says he thought the report was “one of the best books on organization I’ve read.” Ian Mitroff, a professor at the Marshall School of Business at the University of Southern California, observes that “virtually every page is about flawed organizations.”

And Jamie Gorelick, one of the 9/11 commissioners, says she has spent a “tremendous” amount of time talking to business groups and senior management teams since the book’s release. Their intense response to the commission’s 567-page volume, already in its sixth printing, has startled her. “There are bunches of people I’ve come across who have read the whole thing cover to cover and carry it around with them,” she says. “For some people, it’s the Little Red Book. It’s weird. I expected people to read it. I didn’t expect people to inhale it.”


Really, the fervor isn’t so surprising. We live and work in a world where organizational failure is endemic — but where frank, comprehensive dissections of those failures are still woefully infrequent; where success is too easily celebrated and failures are too quickly forgotten; where short-term earnings and publicity concerns block us from confronting — much less learning from — our stumbles and our blunders.

Now we have an opportunity to buck the tide — in the form of three brutally honest anatomies of catastrophe, none of them directly from the business world, published in the last year and a half. The 9/11 Commission’s gripping book, the Columbia Accident Investigation Board’s thorough report on the space-shuttle tragedy, and the New York Times’ reflective account of the scandal involving the fabrications by reporter Jayson Blair are windows on our own organizations’ vulnerabilities. With the glaring clarity of hindsight, all of these tragedies are striking reminders that while individuals can be quite adept at picking up on hints of failure in the making, organizations typically fail to process and act on those warnings.

The FBI field agent warning about terrorists in flight schools; the engineers requesting better photos of the space shuttle’s wing after it was struck by debris; the department editor who wrote a memo warning that Blair shouldn’t be writing for the paper — all these individuals were sending signals of impending disaster. “The biggest screaming headline is that all the knowledge needed was already inside,” says Jeffrey Sonnenfeld, associate dean of executive programs at the Yale School of Management. Or as George Tenet, the former director of central intelligence, told the 9/11 Commission, “The system was blinking red.”


Reacting to those weak signals — to the information trapped within the system — may or may not have prevented these catastrophes. Indeed, we cannot begin to sift through every cause that led to these unthinkable disasters. But each report stresses one of three factors — imagination, culture, or communication — as the greatest culprit in ignoring, trapping, or suppressing crucial warning signs. These were the factors that made the blinking red signals so hard to see.


Institutionalizing Disruptive Intelligence

On September 4, 2001, just before a meeting of cabinet-level officials called the Principals Committee — the first such gathering under the Bush administration to address al Qaeda — Richard Clarke sent a fervent personal letter to Condoleezza Rice, then the national security adviser. The real question, the counterterrorism coordinator insisted, was “Are we serious about dealing with the al Qida [sic] threat? . . . Is al Qida a big deal?” It seems preposterous in hindsight, but it’s a breathtaking illustration of how, even seven days before September 11, government leaders were still underestimating the severity of the threat al Qaeda posed.


The 9/11 Commission calls this lack of imagination “the most important failure” of leaders in the September 11 tragedy. A sort of “cultural asymmetry” had taken hold, blinding leaders to the gravity of the danger. “To us, Afghanistan seemed very far away,” the commission members wrote. “To members of Al Qaeda, America seemed very close. In a sense, they were more globalized than we were.”

Here, then, was the world’s only superpower being threatened by a fanatical, remote, bootstrapped organization. Forgive the analogy, but that sounds remarkably like the innovative startups we know from the business world. Think Napster and the recording industry, or Linux and Microsoft. The 9/11 Commission report may call this “cultural asymmetry”; it also sounds a lot like the concept of disruptive innovation.

Yet cultural asymmetry is just one of the commission’s powerful ideas about the failure of imagination. Perhaps the most startling is the concept of “institutionalizing imagination”: “It is therefore crucial,” the commission writes, “to find a way of routinizing, even bureaucratizing, the exercise of imagination.”


Bureaucratize imagination? Could this be the most oxymoronic statement ever made? Not really. “We don’t mean, ‘Okay, guys, let’s all join hands and be more imaginative,’ ” says Gorelick, the former 9/11 commissioner. “You don’t really institutionalize imagination.” Rather, she says, “you put in place systems that allow the imagination that’s naturally occurring to actually break through.”

The commission’s report proposes a process for doing so. It sounds a lot like scenario planning, a common business process that provides a framework for imagining multiple potential futures. Although the intelligence community had analyzed possible surprise attacks for years, these methods were not used consistently. The Counterterrorist Center did not perform “red team” analyses from the enemy’s perspective that likely would have predicted the use of airplanes in suicide attacks. Nor did it develop indicators for monitoring this kind of attack — or potential defenses against it.

Even in organizations that already use scenario planning, leaders tend to focus on the probable rather than the disruptive. “Every president wants one scenario,” says Peter Schwartz, whose firm, Global Business Network, has done scenario planning with everyone from Ford Motor Co. to, well, the CIA. “In every situation, it’s ‘Tell me what will happen.’ When you ask that question, it forces the intelligence community to come up with one most likely scenario. And the most likely scenario in every situation is more of the same.” To get top management to listen to more than one outcome, Schwartz psychologically tricks them by presenting the most credible outcome first. “If you don’t give them that scenario first, then they will reject everything until they hear the scenario they already believe in,” he says.


Trying to imagine future scenarios — without the right framework or expertise — can, of course, turn bewildering. A more manageable approach, says Karl Weick, a professor at the University of Michigan Business School and the author of Managing the Unexpected (Jossey-Bass, 2001), is to think backward from a potential outcome, which will surface the events that could create it. Weick suggests using the future perfect tense (“By next quarter, we will have lost our biggest customer”) as a simple but disciplined way of imagining what could happen. “It anchors you in the future,” he says.

That exercise is exactly what Clarke urged Rice, with eerie prescience, to do in his September 4 missive. “Decision makers should imagine themselves on a future day when the [Counterterrorism Security Group] has not succeeded in stopping al Qida attacks and hundreds of Americans lay dead in several countries, including the U.S.,” Clarke wrote. “What would those decision makers wish that they had done earlier?”



Disturbing the Perfect Place

The National Aeronautics and Space Administration (NASA) spent the 1960s, quite literally, shooting for the moon. The seemingly impossible successes it achieved during the Apollo era made it a symbol of human accomplishment, establishing a remarkable “can do” culture. But even as the mission of NASA changed, and its goals shifted from man-on-the-moon triumphs to routine shuttle operations, the early glories held fast. NASA had become a “perfect place,” wrote Yale professor Garry Brewer back in 1989. In such cultures, he wrote, the ability to listen to dissent requires “the shock of heavy cannon.”

Somehow, even the Challenger disaster of 1986 was not heavy enough. Then, early on the morning of February 1, 2003, the Columbia shuttle broke apart over the piney woods of East Texas. The physical cause of the accident was a piece of foam debris that struck the shuttle’s left wing about 81 seconds after launch, but that wasn’t the only problem. “In our view,” writes the Columbia Accident Investigation Board (CAIB) in its report, “the NASA organizational culture had as much to do with this accident as the foam.”

NASA, in a nutshell, remained conditioned by its past success. Even after Challenger, the CAIB authors write, NASA suffered from the symptoms of the perfect place. Its decision making was still marked by unwarranted optimism and overconfidence. NASA was still a place where lessons-learned programs were voluntary, where frontline engineers feared ridicule for expressing their concerns, where, writes the CAIB, “the intellectual curiosity and skepticism that a solid safety culture requires was almost entirely absent.”


How do we eliminate perfect-place arrogance in our own organizations? First, don’t be straitjacketed by traditional perspectives. After the foam strike was discovered, engineers called it almost an “in family” event. This meant it was treated in the same way as well-known, traditionally “accepted” risks and therefore was wrongly written off as posing no harm. Although top shuttle management quickly dismissed the threat, lower-level engineers were concerned and asked for better photos in order to more accurately assess the damage. Though they tried three different bureaucratic channels, all of their requests were denied.

“Take your labels lightly, don’t hold them dogmatically,” says Karl Weick, who has studied high-risk organizations extensively. In addition to the in-family label, Weick notes, NASA had long thought of the shuttle as being “operational” when it had really never left the experimental phase. “Once you attach that kind of label to it, you seal yourself off from any likelihood that you’re going to learn anything.”

NASA’s perfect-place culture also led to a warped outlook on safety. After the engineers’ requests for photos were denied because there was no “requirement” for them, they found themselves “in the unusual position of having to prove that the situation was unsafe,” write the CAIB authors, “a reversal of the usual requirement to prove that a situation is safe.” This may sound like mere semantics, but it meant NASA exhibited an overconfident, prove-it-wrong attitude rather than one that demanded engineers prove it right.


To help break down such attitudes, Weick suggests a similar semantic reversal. By restating a close call as a near hit, you turn the event on its head. You almost failed, rather than barely succeeded. It’s simple, but it can be a great reminder that the system is all too capable of big mistakes. “In general, it just breeds a kind of wariness, a kind of attentiveness,” says Weick. “Complacence is what you’re worried about.”

So does all this mean we don’t shoot for the moon? That we dwell on our failures rather than taking pride in our triumphs? Not at all. But there’s a fine line to walk between a proud culture and a prideful one, between celebrating a healthy history of success and resting on your laurels. “I call it delusions of a dream company,” says Sydney Finkelstein, a professor of management at Dartmouth’s Tuck School of Business and the author of Why Smart Executives Fail (Portfolio, 2003). “It creeps up on you. That honest pride starts going toward self-confidence, overconfidence, complacency, and arrogance. It’s just a natural progression.”



Dissolving Environments of Separation

At its core, the newsroom of The New York Times traffics in information. Its products are stories, its suppliers are diverse voices, and its mission is to ferret out the truth. So it’s hard to miss the irony spelled out by the authors of the report investigating the Blair scandal. “A failure to communicate — to tell other editors what some people in the newsroom knew — emerges as the single most consistent cause, after Jayson Blair’s own behavior, of this catastrophe.” The newsroom of The New York Times, the report’s authors write, was “an environment of separation.”

Add to this environment the imperious, hard-driving leadership of executive editor Howell Raines, a self-declared “change agent” bent on outdoing other papers, and you have a recipe for communication disaster. Raines’s reputation for playing favorites and pushing his ideas of what should be in the paper only worsened the situation. “There was no sense that he was on [the newsroom’s] side, that he respected them, that he listened to them,” says USC business school professor Warren Bennis. “There’s a marvelous Middle Eastern phrase about leaders who’ve stopped listening. They say, ‘He has tired ears.’ That’s arrogance.”

Under Raines’s leadership, communication also deteriorated in an increasingly centralized hierarchy. Department heads had traditionally been crucial communication links, key to the flow of information between frontline editors and masthead-level editors, or top management. But as much of their power shifted upward, these key links in the chain lost authority, and the flow of information declined with them.


Yet the report’s recommendations make clear that the Times’ problems were bigger than just Raines. Rather, they plead for strengthening the social network at the Times in order to share information more effectively. “Too much information about matters large and small,” the report laments, “is locked in too few brains.” The report’s authors, many of them reporters and editors themselves, called for meetings more inclusive of all levels of the organization, intranet tools for seeking out resident experts, and office hours for management, similar to those of university professors, to open communication pathways. Committee members also requested more informal brainstorming among reporters and editors in different departments, more cross-departmental meetings, and temporary assignments on different desks.

These seem like simple things, but they’re actually tough to pull off. “I think people often underestimate the amount of resourcefulness that’s required to proactively shape new communication patterns,” says Niko Canner, a cofounder of management-consulting firm Katzenbach Partners. “Do people understand the way to communicate information so that they will get value from an exchange with a conversational partner they’re not used to talking with? Can they build those new behaviors into the day-to-day of how they do their work?”

Valdis Krebs, a developer of software that maps social networks, agrees. “If there’s not a network connecting two departments, then one can bring the best data in the world to the other and it won’t be trusted.” Krebs uses his software to help clients map out who knows whom within an organization — he calls the maps “organizational X-rays” — and then does something decidedly less high-tech. He introduces people on the borders of the networks, creating opportunities for them to work together. Over time, “through day-to-day work, we learn to trust each other,” he says. “And when you get upset, I learn to trust that, too.”
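The idea behind Krebs’s “organizational X-rays” can be sketched in a few lines. The sketch below is a hypothetical illustration, not Krebs’s actual software; the people, desks, and ties are invented. It maps who knows whom, then surfaces the people whose ties cross departmental borders — the natural candidates for his introductions.

```python
# Hypothetical sketch of an "organizational X-ray": map who knows whom,
# then find the people whose ties cross departmental borders.
# (Illustrative only; the names, desks, and ties are invented.)

dept = {
    "ana": "metro", "ben": "metro", "carla": "metro",
    "dev": "national", "eli": "national", "fay": "national",
}
ties = [
    ("ana", "ben"), ("ben", "carla"), ("ana", "carla"),  # within metro
    ("dev", "eli"), ("eli", "fay"), ("dev", "fay"),      # within national
    ("carla", "dev"),                                    # the lone bridge
]

# Boundary spanners: anyone with at least one tie into another department.
spanners = sorted({p for a, b in ties if dept[a] != dept[b] for p in (a, b)})
print(spanners)  # ['carla', 'dev']
```

In a real organization the ties would come from surveys or communication logs, and an analyst would look not just for the spanners but for departments with no spanners at all — the “environments of separation” the Times report describes.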


It’s an intriguing, potentially important exercise. But in real life, at times of real failure, those social networks won’t work unless leaders let them. When people bypass the organization’s hierarchy, the manager’s instinctive response can be to thwart them, even punish them for breaking the chain of command.

In the face of failure, however, leaders must embrace the unconventional. They have to allow communication at the periphery — and have the discipline and humility to listen for notes that sound off-key. They must shape cultures that are open both to the possibility of failure and to the need to learn when problems do occur. And they, like all of us, have to imagine the unimaginable. And they should start now. Because, as Richard Clarke warned in his 2001 memo, “That future day could happen at any time.”

Jena McGregor is Fast Company’s associate editor.