
AI is moving too fast, and that’s a good thing

This rapid rate of innovation gives us all the chance to gut-check what we really want out of this technology—while we still have time to affect its course.

[Illustration: Harry Campbell]

2019 was a great year for seeing what AI could do. Waymo deployed self-driving taxis to actual paying customers in Arizona. Bots from OpenAI and DeepMind beat the top professionals in two major esports games. A deep-learning algorithm performed as well as doctors—and sometimes better—at spotting lung cancer tumors in medical imaging.


But as for what AI should do, 2019 was terrible. Amazon’s facial recognition software? Racist, according to MIT researchers, who reported that the tech giant’s algorithms misidentify nearly a third of dark-skinned women’s faces (while demonstrating near-perfect accuracy for light-skinned men’s). Emotion detection, used by companies such as WeSee and HireVue to perform threat assessments and screen job applicants? Hogwash, says the Association for Psychological Science. Even the wonky field of natural language processing took a hit, when a state-of-the-art system called GPT-2—capable of generating hundreds of words of convincing text after only a few phrases of prompting—was deemed too risky to release by its own creators, OpenAI, which feared it could be used “maliciously” to propagate fake news, hate speech, or worse.

2019, in other words, was the year that two things became unavoidably clear about the rocket ship of innovation called artificial intelligence. One: It’s accelerating faster than most of us expected. Two: It’s got some serious screws loose.

That’s a scary realization to have, given that we’re collectively strapped into this rocket instead of watching it from a safe distance. But AI’s anxiety-inducing progress has an upside: For perhaps the first time, the unintended consequences of a disruptive technology are visible in the moment, instead of years or even decades later. And that means that while we may be moving too quickly for comfort, we can actually grab the throttle—and steer.

[Illustration: Harry Campbell]
It’s easy to forget that before 2012, the technology we now call AI—deep learning with artificial neural networks—for all practical purposes didn’t exist. The concept of using layers of digital connections (organized in a crude approximation of biological brain tissue) to learn pattern-recognition tasks was decades old, but largely stuck in an academic rut. Then, in September 2012, a neural network designed by students of University of Toronto professor and future “godfather of deep learning” Geoffrey Hinton unexpectedly smashed records on a highly regarded computer-vision challenge called ImageNet. The test asks software to correctly identify the content of millions of images: say, a picture of a parrot, or a guitar. The students’ neural net made half as many errors as the runner-up.

Suddenly, deep learning “worked.” Within five years, Google and Microsoft had hired scores of deep-learning experts and were dubbing themselves “AI first” companies. And it wasn’t just Big Tech: A 2018 global survey of more than 2,000 companies by consulting firm McKinsey found that more than three-quarters of them had already incorporated AI or were running pilot programs to do so.

It took modern smartphones 10 years to “eat the world,” as Andreessen Horowitz analyst Benedict Evans famously put it; the web, about 20. But in just five years, AI has gone from laboratory curiosity to economic force—contributing an estimated $2 trillion to global GDP in 2017 and 2018, according to accounting firm PricewaterhouseCoopers. “We’re certainly working on a compressed time cycle when it comes to the speed of AI evolution,” says R. David Edelman, a former technology adviser to President Barack Obama who now leads AI policy research at MIT.


This unprecedented pace has helped drive both the advances and the agita around artificial intelligence. Depending on the question, between 54% and 75% of respondents to a 2019 survey of Americans by the global marketing consultancy Edelman, conducted in collaboration with the World Economic Forum, believe that AI will hurt the poor and benefit the wealthy, increase social isolation, and lead to a “loss of human intellectual capabilities.” A third of respondents even think that deepfakes—creepily convincing phony videos of celebrities, government officials, or everyday people, generated by deep-learning networks—could contribute to “an information war that, in turn, might lead to a shooting war.”

So how should society respond, without resorting to what MIT’s Edelman (no relation to the consultancy) calls “latter-day Luddism”? After all, a smash-the-looms approach didn’t work for the actual Luddites, who tried it during the Industrial Revolution. But the opposite—a blind faith that innovation will eventually work its own kinks out—won’t do either. (Exhibit A: the entire planet, after 100 years of carbon-spewing vehicles.)

This is where the rocket-ship throttle comes into play. The standard timeline of technological innovation follows what’s known as an “S curve”: a slow start, followed by a rising slope as the tech catches on, and ending with a leveling off as it becomes ubiquitous. With previous world-eating technologies such as the automobile or the smartphone, unintended consequences didn’t become urgently visible until we were well up the slope, or even on the plateau, of the S curve. Mass adoption of cars, for example, set off several slow-motion catastrophes: not just climate change, but also a weakening of public transportation, decades’ worth of distorted urban planning, and incredible carnage. And by the time psychologist Jean Twenge asserted, in 2017, that smartphones may have “destroy[ed] a generation” by encouraging social media addiction and anxiety, nearly 3 billion people were already using them. In both cases, the concrete had set before we realized anything was wrong. But by going from “nowhere” to “seemingly everywhere” in roughly half a decade, AI is actually offering us a real-time feedback mechanism for course-correcting.

Can we engineer neural networks to be more interpretable and less like mysterious “black boxes”? How do we systematically test deep-learning systems for unethical biases? Should an equivalent of the FDA or EPA be established for vetting AI practices and products? “The fact that it’s changing rapidly [is] bringing a certain urgency to the asking of these questions,” says Nick Obradovich, a scientist at the Max Planck Institute for Human Development who studies the social challenges brought on by AI.

This real-time reckoning with AI is already underway. At its 2019 I/O conference, Google unveiled a system called TCAV that acts as a kind of “bullshit detector” for deep-learning networks. (Want to make sure your cancer-screening AI is really spotting tumors, not statistical glitches? TCAV can help!) In May, 42 countries, including the U.S., formally adopted policy guidelines developed by the Organization for Economic Co-operation and Development (OECD), “agreeing to uphold international standards that aim to ensure AI systems are designed to be robust, safe, fair, and trustworthy.” Earlier this year, MIT researchers created a whole new scientific discipline—“machine behavior”—designed to investigate how algorithms and humans interact “in the wild.” The paper was coauthored by two dozen experts from fields as diverse as economics, political science, robotics, and sociology.
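For readers curious what such a detector looks like under the hood, here is a minimal sketch of the idea behind concept activation vectors, written with numpy and scikit-learn. It is not the actual TCAV library’s API, and the model hooks named in the usage comment (acts_of, grads_of) are hypothetical stand-ins for whatever your own network exposes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def concept_activation_vector(concept_acts, random_acts):
    # Fit a linear classifier that separates hidden-layer activations produced
    # by "concept" examples (say, patches of actual tumor tissue) from
    # activations produced by random images; its weight vector points in the
    # concept's direction in activation space.
    X = np.vstack([concept_acts, random_acts])
    y = np.array([1] * len(concept_acts) + [0] * len(random_acts))
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    v = clf.coef_[0]
    return v / np.linalg.norm(v)

def tcav_score(class_gradients, cav):
    # Fraction of test examples whose predicted class score increases when the
    # activations are nudged in the concept direction: roughly, how often the
    # model leans on this concept when it makes its call.
    directional_derivatives = class_gradients @ cav
    return float(np.mean(directional_derivatives > 0))

# Hypothetical usage (acts_of and grads_of are assumed hooks into your model):
# cav = concept_activation_vector(acts_of(tumor_images), acts_of(random_images))
# score = tcav_score(grads_of(test_scans, target_class="malignant"), cav)
```

A score near 1.0 suggests the model really is keying on the concept; a score near chance suggests it may be latching onto something else entirely, which is the “statistical glitch” scenario the tool is meant to expose.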

“At some point in the future we will have some regulation of the development and use of algorithms in our lives,” says Obradovich, who is one of the machine-behavior coauthors. In the meantime, “there’s a very big role for well-trained social scientists to [play by stepping] in and starting to uncover problems that are occurring, and might occur, right around the corner.”


This emerging attitude toward innovative tech could spur us to make AI not just “work,” but behave. It could also set the template for how we respond to other disruptive technologies—such as 5G internet, cryptocurrency, self-driving cars, and CRISPR gene editing. “This ideal—of codesigning systems and [social] policy—can be a repeatable formula for new innovations,” says Edelman. “In fact, it probably has to [be].” After all, AI won’t be the last rocket we’ll find ourselves strapped to. And no one’s got “the right stuff” to pilot it except us.

Quantum Speed

AI has quickly gone from laboratory curiosity to economic force.

$2 trillion: AI’s estimated contribution to global GDP in 2017 and 2018, according to accounting firm PricewaterhouseCoopers

75%: The share of more than 2,000 companies in a 2018 global survey by consulting firm McKinsey that had already incorporated AI or were running pilot programs to do so

2019: The year Google announced that its quantum computer had reached quantum supremacy, meaning that it could solve problems regular supercomputers could not, according to the company

A version of this article appeared in the Winter 2019/2020 issue of Fast Company magazine.


About the author

John Pavlus is a writer and filmmaker focusing on science, tech, and design topics. His writing has appeared in Wired, New York, Scientific American, Technology Review, BBC Future, and other outlets.
