While people worry about the moment when AI will become smarter than humans and jobs will disappear, we’re already crossing another boundary in technological progress. “There’s this much earlier moment when technology actually overwhelms human weaknesses,” says Tristan Harris, cofounder of the Center for Humane Technology and a former “design ethicist” at Google. Tech designed for the attention economy (social media that is reshaping everything from elections to how we date, and that is engineered to make us look at our phones dozens of times a day) exploits how our brains work. The Center wants the tech industry to embrace a new agenda to fix the problem, focused on how to redesign products so that they improve humanity instead of exploiting it.
There are so many examples of tech industry failures that it’s easy to get lost in the cacophony, but Harris, who presented the Center’s work to a tech industry audience on Tuesday in San Francisco, argues that they are all rooted in a so-called race to the bottom of the brainstem. Desperate for clicks and views, tech platforms look for any way to bring people back by exploiting human instincts. It works. The average time people spend watching YouTube on mobile, for example, is now more than 60 minutes a day, driven largely by the platform’s recommendation engine. When you hit play on a video, intending to watch only one, “it wakes up a little avatar voodoo doll version of you inside a Google server somewhere, based on all your clicks, based on all your likes,” he told the crowd at the event. “We don’t have to manipulate you because we just have to manipulate the voodoo doll. We test a bunch of videos on him and we say which one’s going to keep you the longest.”
The AI works in ways that tech companies never intended. A few years ago, Harris says, YouTube realized that the platform was recommending anorexia videos to teen girls who had watched videos about dieting. When Facebook started recommending groups, its AI automatically suggested that women joining groups for new mothers also join an anti-vaccine conspiracy group. Even when platforms can tackle specific issues (YouTube took steps to stop recommending flat Earth and anorexia videos), the model the platforms use means that new problems continually emerge. Social media use is also linked to anxiety, depression, and loneliness. “We spend now about a fourth of our lives on the screen, in these artificial social systems,” Harris says. Tech may also be making people more politically polarized.
The fixes Harris proposes seek to fundamentally reshape the relationship between users and tech companies: What if, rather than “voodoo doll” AI designed to keep your attention, we had AI fiduciaries that acted in our interests? What would it mean if dating apps, for example, competed to help people actually form relationships, rather than competing on how to keep users endlessly scrolling through new matches? What if social media worked to connect us with people in real life, and to bridge political divides rather than exacerbate them? What if tech platforms were focused on solving our problems rather than keeping our attention?
To help companies explore these ideas, the Center will be launching a design kit to help companies better understand human nature as it relates to tech, hosting a conference, and trying to craft better language to talk about the problem. At the event, Harris encouraged tech workers to start talking about the issue in meetings, and urged shareholders and VCs to demand more “humane tech.”
“While we’ve been upgrading the machines, we’ve been downgrading our humanity,” Harris says. “And this is existential, because our problems are going up–climate change, inequality–our ability and need to agree with each other and see the world the same way [and for] critical thought and discourse is only going up, but our capacity to do that is going down because of human downgrading. So we’re going the wrong way.”
The Center for Humane Technology is only one organization looking at the problem, and it brings one perspective. After the talk, many in the tech industry criticized it, noting that it hadn’t mentioned some critical problems in technology, such as the bias built into algorithms. Some argued that the presentation lacked substantive solutions. But the argument that tech companies need to make radical and systemic change, shifting to business models that don’t rely on advertising, is something that more people should be talking about. The question, of course, is whether that change in business models can actually happen broadly. “The truth is if Silicon Valley thought there wasn’t money to be made in dopamine hits and that there was money to be made in unlocking human potential, they’d absolutely do the latter–and in fact in places where they can see revenue, that’s precisely what they do,” the technologist and writer Tom Coates tweeted after the event.
Harris argues that the current typical models, supposedly free to users, aren’t working. “We’re getting free social isolation, free downgrading of attention spans, free obstruction of our shared truth, free incivility,” he says. “Free is the most expensive business model we’ve ever created.” It’s possible that people would be more willing to pay for real alternatives than tech companies currently believe. And even the largest tech companies are slowly beginning to acknowledge at least some of the problems. When Harris worked at Google and first started raising these issues several years ago, he says he was discouraged; now, he believes that change is possible. Companies like Google and Facebook are taking early steps to find new ways to work, using the concept of “time well spent” that Harris helped promote. The relatively small size of the industry could help. Harris compares the problem to climate change. “It can be catastrophic,” he says. “But unlike climate change, only about a thousand people need to change what they’re doing.”