FiveThirtyEight's Nate Silver Explains Why We Suck At Predictions (And How To Improve)

People see patterns in all sorts of things—but they're rarely right about what they mean. In "The Signal and the Noise," Nate Silver investigates why—and talks with Fast Company about how poker can make you better at business.

"Wunderkind" and "prodigy" are among the superlatives often attached to Nate Silver. After pioneering a system for baseball prediction, he leapt into the public eye (and became a bit of a nerd icon) by calling 49 of 50 states in the 2008 presidential election. And though he became one of Time magazine's 100 Most Influential People and landed a New York Times gig for his prognostication prowess, Silver takes a dim view on prediction.

"We need to stop and admit it: we have a prediction problem," he writes in the introduction to his new book, The Signal and the Noise. "We love to predict things—and we aren't very good at it."

The diagnosis stems from our collective failure to foresee epochal events (say, the September 11 attacks or the 2008 financial crisis) and from a political culture rife with experts who forecast constantly and are consistently wrong. The solution, he says, requires a change in attitude, one that emphasizes probability.

Since prediction is (probably) essential to business, Fast Company wanted to get a bit better at it. We talked with Silver about informational humility, how to get past ad hoc decision-making, and why, if you want to get better at working with predictions, you should really start playing more poker.

FAST COMPANY: If we're in the era of Big Data, how come it's so hard to make decisions?

NATE SILVER: When human judgment and big data intersect, there are some funny things that happen. On the one hand, we get access to more and more information that ought to help us make better decisions. On the other hand, the more information you have, the more selective you can be about which information you pick out to tell a narrative, and that narrative might not be the true or accurate one, or the one that helps your business, but the one that makes you feel good or that your friends agree with.

We see this in polls. After the conventions we've gone from having three polls a day to something like 20. When you have 20, people get a lot angrier about things, because out of 20 polls you can find the three best Obama polls, or the three best Romney polls, on any given day and construct a narrative from that. But then you're really just looking at the outliers when you should be looking at what the consensus of the information says.
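
A rough illustration of the cherry-picking problem (this is a made-up simulation, not FiveThirtyEight's methodology; the polls are random draws around a hypothetical +1-point margin): the same 20 polls can support three different stories depending on which ones you keep.

```python
import random

random.seed(0)

# Hypothetical example: 20 daily polls of a race whose true margin is
# +1 point for candidate A, each with ~3 points of sampling noise.
TRUE_MARGIN = 1.0
polls = [random.gauss(TRUE_MARGIN, 3.0) for _ in range(20)]

consensus = sum(polls) / len(polls)            # use all the information
best_for_a = sorted(polls, reverse=True)[:3]   # the three best polls for A
best_for_b = sorted(polls)[:3]                 # the three best polls for B

print(f"consensus margin:    {consensus:+.1f}")
print(f"cherry-picked for A: {sum(best_for_a) / 3:+.1f}")
print(f"cherry-picked for B: {sum(best_for_b) / 3:+.1f}")
```

The cherry-picked outliers point in opposite directions; the average of all 20 sits near the truth.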

The book is about how we process information for the purpose of making better decisions. It calls on us to bring a little more humility to that process: all the biases we have in any decision we make stick with us when we're looking through information, and can potentially be worsened by it.

So if you were a decision maker, how would you recommend maximizing the information that's available to you?

I think what you want to do is make sure you're willing to engage with a lot of information, but at the same time make sure you're treating it with a fair amount of skepticism.

Look, for example, at how computers play chess: Deep Blue, the IBM computer that beat Garry Kasparov. In any given chess position there are, on average, about 40 legal moves you can make, and Deep Blue explores all 40 of them to some extent, where a human player might fixate on one or two and look only at those. And usually that's fine. But sometimes you can miss a big threat that an opponent has, or you can miss an opportunity to do something that is "unorthodox" but is actually the better play.
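
A toy sketch of the same idea (the moves and numbers here are invented purely for illustration): if you shortlist candidates by how intuitive they look before evaluating them, an unorthodox-but-strong move never gets considered, while a breadth-first look at every legal move finds it.

```python
# Invented values for illustration only: "appeal" is how intuitive a
# move looks at a glance; "value" is what a deeper search would reveal.
moves = [
    {"name": "Nf3",  "appeal": 0.9, "value": 0.3},
    {"name": "Qd2",  "appeal": 0.8, "value": 0.2},
    {"name": "g4!?", "appeal": 0.2, "value": 0.7},  # unorthodox, but best
    {"name": "h3",   "appeal": 0.5, "value": 0.1},
]

# Human-style shortcut: only evaluate the two most intuitive candidates.
shortlist = sorted(moves, key=lambda m: m["appeal"], reverse=True)[:2]
human_pick = max(shortlist, key=lambda m: m["value"])

# Engine-style breadth: evaluate every legal move to some extent.
engine_pick = max(moves, key=lambda m: m["value"])

print("shortlist picks:  ", human_pick["name"])   # Nf3
print("full search picks:", engine_pick["name"])  # g4!?
```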

So, if you have a business where you're flexible enough to try out different things, then your threshold for experimenting should be low. Instead of trying to find the perfect pitch, find ways to let your customers give you feedback and decide for you; things will develop more organically that way, instead of you looking for the magic bullet that looks perfect in the statistical model or the PowerPoint but winds up being the New Coke or something. You put all your eggs in one basket, and lo and behold it doesn't work, and you don't have a lot of ways out.

At the same time, I think people also don't realize how many shortcuts we take when we want a quick and dirty answer, or how approximate those approximations really are. For me, it helps to have played cards, and to have looked at sports and politics, to get some experience for how to do that. But if you go in expecting to have the perfect idea, often you'll find you have none at all.

In the book you talk about having a probabilistic attitude toward data and decision-making. What are the lessons to be learned there?

I think it's going to have to do with this idea of having a more diverse array of strategies, and it also means recognizing when you have enough information about something to change your mind. When you're testing out a new product and you have the opportunity to pull the plug on it at some point, how much negative feedback should you get before making that decision?

What people usually do is make those decisions on an ad hoc basis. You might have a six-month rollout plan for a new product, but if you're not getting good feedback, you pull it after two months, when maybe it needed the full six months to mature. Or you might be in denial about something, where you have a plan and you stick to it no matter what. So it's a question of determining what the signs and signals are, setting up those rules in advance, and abiding by them later on when you get different types of feedback.
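
As a sketch of what "setting up the rules in advance" might look like in practice (the thresholds and the conversion-rate metric are hypothetical, not from Silver), the point is that the stopping rule is written down before the rollout and then mechanically applied to whatever feedback arrives:

```python
# Pre-registered stopping rule, agreed on before launch.
ROLLOUT_MONTHS = 6
MIN_MONTHS_BEFORE_DECIDING = 3   # don't react to the first noisy month
KILL_THRESHOLD = 0.02            # e.g., minimum acceptable conversion rate

def should_pull_plug(monthly_conversion_rates):
    """Apply the pre-registered rule to the data seen so far."""
    if len(monthly_conversion_rates) < MIN_MONTHS_BEFORE_DECIDING:
        return False  # not enough signal yet, by prior agreement
    recent = monthly_conversion_rates[-MIN_MONTHS_BEFORE_DECIDING:]
    return all(rate < KILL_THRESHOLD for rate in recent)

print(should_pull_plug([0.010, 0.012]))                # False: too early
print(should_pull_plug([0.010, 0.012, 0.011]))         # True: 3 bad months
print(should_pull_plug([0.010, 0.030, 0.025, 0.028]))  # False: recovered
```

The rule itself is crude, but because it was fixed in advance, a panicky second month can't override it, and a genuinely bad third month can't be rationalized away.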

And it's a little hard to generalize, because it depends on how rich the data are in different fields. In baseball, for example, it takes a long time to determine who the best players are, because there's a lot of randomness involved, whereas in tennis the number one seed at the U.S. Open is going to beat the number 16 seed like 97% of the time or something like that. Recognizing the signal-to-noise ratio in what you're getting may determine what your strategy is and how confident you can be in making the decision.
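
A quick simulation makes the contrast concrete (the win probabilities are ballpark figures, not Silver's): when the better side wins only about 55% of individual games, one game tells you almost nothing and even a long season leaves real doubt, whereas a 97%-per-match favorite reveals itself almost immediately.

```python
import random

random.seed(1)

def favorite_finishes_ahead(p_win, n_games, trials=10_000):
    """Fraction of simulated series in which the genuinely better
    side ends up with the better record (n_games odd to avoid ties)."""
    ahead = 0
    for _ in range(trials):
        wins = sum(random.random() < p_win for _ in range(n_games))
        if wins > n_games - wins:
            ahead += 1
    return ahead / trials

# High noise: a good MLB team wins roughly 55% of individual games.
print(f"baseball, 1 game:    {favorite_finishes_ahead(0.55, 1):.0%}")
print(f"baseball, 161 games: {favorite_finishes_ahead(0.55, 161):.0%}")

# Low noise: the top seed beats the 16 seed almost every time.
print(f"tennis, 1 match:     {favorite_finishes_ahead(0.97, 1):.0%}")
```

Even after a 161-game season, the better baseball team finishes ahead only around 90% of the time; the tennis favorite clears that bar in a single match.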

How can we get a bead on what that signal-to-noise ratio is?

Wherever you can, find ways to make actual predictions; there's no substitute for testing yourself on real-world situations where you don't know the answer in advance.

I talked at an investment fund recently. Since they know there's a lot of noise in stock-market data, they actually have their employees play a lot of poker. There's also a lot of randomness and luck in poker, but at least it gets to the long run a little bit faster. So you develop an intuitive sense, one that is honed and refined through experience, for what's meaningful and for when you've gotten enough data to say "this represents a change in my business environment" or, equally important, to say "this doesn't." In fact, it might be more important not to get freaked out about one bad month's worth of sales.
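
A small simulation shows why "getting to the long run" matters (the win rate and variance figures are invented, though roughly in the range serious players cite): a real edge of +5 big blinds per 100 hands is buried under noise after a thousand hands and only emerges clearly over hundreds of thousands.

```python
import random

random.seed(2)

# Hypothetical numbers: a winning player earns +5 big blinds per 100
# hands on average, with a standard deviation near 80 bb/100; the
# edge is tiny relative to the noise.
EDGE = 5.0     # true win rate, bb/100
NOISE = 80.0   # per-100-hand standard deviation, bb/100

def observed_winrate(n_hundreds):
    """Average result over n_hundreds blocks of 100 hands."""
    results = [random.gauss(EDGE, NOISE) for _ in range(n_hundreds)]
    return sum(results) / len(results)

for n in (10, 100, 1_000, 10_000):  # 1,000 to 1,000,000 hands
    print(f"after {n * 100:>9,} hands: {observed_winrate(n):+6.1f} bb/100")
```

The same logic applies to one bad month of sales: before reacting, ask how many "hands" you've actually played.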

By playing games you can artificially speed up your learning curve and develop the right kind of thought processes. You can look at principles from cases that are more idealized, like poker or sports, which are laboratory experiments that still occur in real-world situations, see what works well for people in those fields, and apply those same kinds of attitudes, habits, and aptitudes to a business case.

What are those implications?

So, basically, every business should have an NCAA bracket pool and take it really seriously.

This interview has been condensed and edited. Follow Drake Baer and Nate Silver on Twitter.

[Image: Flickr user Louish]
