I recently asked a gathering of business leaders in New York City how much time they’d spent in a consumer’s home over the past year. Out of 3,000 executives in the audience, only two raised their hands. The unspoken reply could just as well have been: “Why should I? All I have to do is turn on my computer, and I’ll find never-ending streams of tables modeling how consumers feel about my brand.”
That’s certainly true. But isn’t it tantamount to explaining your interest in a romantic partner by saying they’re 5-foot-7, that you’re fond of people with hair the color of Pantone 39134, and that the last four digits of their cell phone number turn you on?
Businesses have come to rely on big data to understand the emotions of their most important asset—customers. And while big data is helping companies see patterns in huge masses of information, it’s proving limited for understanding the most important aspects of customers’ needs and desires.
Not long ago, one of the major U.S. banking institutions misinterpreted an increase in “churn.” This term refers to customers who begin to move their money around, refinance their mortgages, and show other signs that they may be on the verge of leaving the bank. Faced with those signs, the bank began to prepare letters asking its customers to reconsider moving on.
Before mailing the letters, though, executives discovered something surprising. While big data had uncovered evidence of churn, it couldn’t explain the cause. The churn wasn’t because customers were dissatisfied with the bank. The real reason? It turned out that those customers were getting divorces, which explained why they were shifting around their assets.
The bank had relied on correlations generated by its most powerful algorithms, but an essential piece of the puzzle was missing: the smaller-scale relationships underlying those correlations. To understand causation, the bank execs needed to understand what I call “small data.”
To be sure, the correlation-causation fallacy is nothing new, and it will be around no matter what analytical methods we use. Big data, after all, never promised to eradicate that fallacy in the first place, and data scientists know the limits of their tools better than anybody. The trouble, though, is that big data’s biggest wins are leading companies to become overconfident, and to overlook the crucial, counterbalancing small data right under their noses.
In 2002, Lego was close to bankruptcy. For years, the iconic toymaker (which my company has worked with) had been anxious about declining sales. The younger generation had simply moved on, preferring digital play to plastic blocks. Lego’s young customers were leaving the Lego universe far behind.
Big data had one lesson for Lego: The instant-gratification generation had arrived, and kids of the future would no longer make time for incremental physical play. So in 2003, heeding that lesson, Lego made a dramatic move. It decided to replace its tiny bricks with gigantic building blocks. Where the construction of a Lego castle might once have taken days, now the journey was reduced to hours, if not minutes.
Surprisingly, the move had the opposite of its intended effect. By Christmas 2003, Lego was stunned to realize a $240 million operating loss on sales of $1 billion and was sitting on some $747 million in debt, leaving the fate of the entire company in jeopardy.
It was then, in the nick of time, that a team from Lego decided to visit consumers in their homes across Europe. While visiting one family in Germany, Lego asked an 11-year-old boy what he was proudest of; he pointed out an old pair of raggedy skateboarding sneakers that he kept displayed on a shelf. He explained that the sneakers were proof that he was the best skater in town. The wear on the side of the sneakers demonstrated to his friends that he could slide his board at the perfect angle. The shoes had become his trophy.
The story was surprising, to say the least. That seemingly insignificant consumer observation, a piece of small data that corporate researchers couldn’t have picked up by looking for patterns in big data, showed that if kids are placed in the driver’s seat, time is no longer the most essential element. Given the right motivation, they’re still willing to devote hundreds of hours to perfecting a skateboard trick or building a fantastic castle.
Soon after, Lego returned to its traditional tiny bricks and dramatically increased the number of bricks in each box. It was during this period, too, that the company laid the foundation for the Lego movie. These moves ultimately proved the more innovative, helping infuse a renewed passion into kids’ play patterns. Lego quickly recovered, and today, 10 years later, it’s the largest toy manufacturer in the world.
I’m not an opponent of big data. I am, however, a huge believer in achieving the right balance between correlation and causation, and in rethinking the methods we use to do that. No matter how intelligent the analysts and data miners sitting in their air-conditioned offices may be, the hypotheses they test against enormous masses of data points are still just that: abstract hypotheses that also need testing out in the real world, often on a small (which is to say human) scale.
Because it’s ultimately in small data, now and forever, that the clearest evidence of who we are and what we desire resides, even if, as those Lego execs found out more than a decade ago, it’s hiding in a pair of old sneakers with worn-down heels.