In the nexus of business and technology, there’s no hotter topic than the transformative potential of machine learning and artificial intelligence, says Susan Athey, the Economics of Technology Professor at Stanford Graduate School of Business.
“You read in the newspaper about how these sentient robots are going to take over the world,” she says. But go to an academic conference and you’ll find technical experts mostly exploring things like how well computers can distinguish between pictures of cats and dogs.
While popular misconceptions of the cutting-edge science of artificial intelligence are to be expected, Athey says there are also plenty of misguided attempts to implement it in practice.
In a talk about the digital transformation of business titled “Accelerate the Next Stage of Your Success,” Athey warned that decision makers can be, and often are in her experience, led astray by misunderstanding what machine learning models can and cannot do. She also outlined some of the risks that come from losing sight of fundamentals in favor of shiny new technologies.
“I’ve seen companies blow hundreds of millions of dollars by letting engineers make decisions without the input of business people,” she says. “Machine learning solves simple problems, but it is not sentient. And it struggles when applied to many business problems.”
Where machine learning models excel, Athey says, is in making predictions whose quality can be quickly and accurately evaluated, though perhaps at some cost, such as assessing what’s a cat and what’s a dog. “Deep neural nets are very good at finding patterns in a stable environment,” she says. “They add value in environments where each unit is described by a large set of features, like all the pixels in an image, text in a review, or a user’s history of interactions with a website, but where it is easy and quick to check whether the algorithm is doing a good job.”
Image classification is a prime example of this type of simple problem, Athey says, where computers analyze pixels to find underlying patterns. Here, recent innovations mean that algorithms can often distinguish between cats and dogs with more than 90% accuracy.
Thinking Massively vs. Thinking Long
While machine learning models are good at solving simple problems at massive scale, it is harder to use them when it takes a long time to measure success. For example, if an algorithm is used to give 10-year loans or to make long-term investments, it is hard to learn quickly whether it works or not, and it is hard to iteratively improve the algorithm’s performance. If the environment changes in a way that invalidates the model’s predictions, it might take a long time to figure that out.
Most machine learning algorithms are also bad at reasoning about what Athey calls “what-if” scenarios: what would happen if a company were to change its prices, or if it hadn’t run a certain ad campaign. And here is where misguided faith in the accuracy of machine learning can become problematic in practice.
Consider an algorithm designed to predict hotel-room occupancy based on observed prices, Athey says. It would look at historical occupancy rates and prices and draw the correct conclusion that the hotel is full when prices are high. However, if that predictive model was applied to optimize prices, it would lead to the conclusion that in order to get more people into your hotel, you should raise prices. “Which is of course wrong,” Athey says. “Just because higher prices are correlated with a full hotel doesn’t mean if you change your price you will sell more hotel rooms.”
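The hotel example can be made concrete with a small simulation. In the sketch below, the demand function, prices, and coefficients are all invented for illustration (not real hotel data): a hidden demand level drives both the price the hotel sets and how full it is, so a naive predictive model fit to historical data finds that higher prices go with higher occupancy, even though the true causal effect of raising the price, holding demand fixed, is negative.

```python
import random

random.seed(0)

# Simulate nights where an unobserved demand level drives BOTH price and occupancy.
# By construction, the true causal effect of price on occupancy is NEGATIVE.
nights = []
for _ in range(5000):
    demand = random.uniform(0, 1)            # latent demand (e.g., a conference in town)
    price = 100 + 100 * demand               # the hotel raises prices when demand is high
    occupancy = 0.2 + 0.9 * demand - 0.001 * price  # price causally LOWERS occupancy
    nights.append((price, occupancy))

# Naive predictive model: slope of occupancy on observed price (simple least squares).
n = len(nights)
mean_p = sum(p for p, _ in nights) / n
mean_o = sum(o for _, o in nights) / n
slope = (sum((p - mean_p) * (o - mean_o) for p, o in nights)
         / sum((p - mean_p) ** 2 for p, _ in nights))
print(f"observed slope of occupancy on price: {slope:+.4f}")  # positive

# The what-if question: hold demand fixed and actually change the price.
demand = 0.5
occ_low = 0.2 + 0.9 * demand - 0.001 * 120   # occupancy at a price of 120
occ_high = 0.2 + 0.9 * demand - 0.001 * 180  # occupancy at a price of 180
print(f"same demand: price 120 -> {occ_low:.3f}, price 180 -> {occ_high:.3f}")
```

The fitted slope comes out positive (high prices predict a full hotel), while the intervention at fixed demand shows that raising the price reduces occupancy, which is exactly the gap between prediction and a “what-if” answer that Athey describes.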
Of course, any pricing specialist employed by a hotel would know that demand curves don’t work like that. The problem comes when decisions start flowing from a conviction that machine learning models solve every problem when applied in an off-the-shelf fashion. Instead, Athey says, leaders need to determine which specific parts of a problem can be outsourced to algorithms and which parts should be guided by old-fashioned know-how. Making that distinction can be a real challenge, since most people don’t really understand what’s going on under the hood of these black-box algorithms.
“If business people aren’t educated and confident, they get crowded out of the room, because it’s hard for them to argue with the technical folks who build the algorithms,” she says.
“The more educated we are and the more confident we are about the basic ideas, the easier it is to say, ‘I don’t care about your mumbo-jumbo, let’s just talk basic principles here. What you’re saying doesn’t make sense. These are some basic [economic] principles, and if your model tells us something different, there’s something wrong with the model.'”
Bridge the Gap Between Algorithms and Expertise
Of course, none of this dampens Athey’s excitement about the impact that machine learning and artificial intelligence are already having. Indeed, Athey’s current research, much of it joint with fellow Stanford GSB professors Guido Imbens and Stefan Wager, aims to build tailored machine learning models that focus on what-if questions rather than pure prediction, and she is encouraged by their early success. Even with methods customized for what-if questions, however, business knowledge is required to ensure the correct application. Thus, she wants to make sure that executives approach the digital transformation of industries and business models holistically.
“We need business leaders for this transformation, because the technical experts don’t always see the big picture,” she says. “They don’t see that the metrics they’re optimizing for don’t factor in all of the business considerations.”
Athey also sees great opportunity for a new generation of leaders to step in and bridge the gaps between algorithmic insights and human expertise: “As this rolls through the whole economy, every company is going to go through the same journey. And they’re all going to need people to figure out new business system plans and invent new ways of thinking.”
This article was originally published on Stanford Business and is republished here with permission.