
In this exclusive excerpt from her new book, “Trampled by Unicorns,” tech veteran Maëlle Gavet outlines the need for decisive leadership to prevent social media platforms from destroying society.

Tech giants need to rediscover their sense of morality and empathy—before it’s too late


By Maëlle Gavet

I came to the tech industry accidentally. 

Almost two decades ago, not long after completing a Bachelor’s degree in Russian language and literature at the Sorbonne in Paris, I enrolled at the IEP Paris, better known as Sciences Po, which turned out to be a gateway to a new world. In addition to an immersion in the humanities, it exposed me to sociology, political science, macro- and microeconomics, and so many other things. It was in many ways a map of the world.

Where did that map lead me? In an unexpected direction: toward technology startups—building the “Amazon of Russia” with Ozon.ru, then to online travel and restaurant reservations with Booking.com, Kayak, and OpenTable, and from there to Compass, a pioneering real estate technology platform. Over the years, some skeptics in the tech world—and elsewhere—have argued that the only “practical” use for my humanities background was as preparation for a lifetime of late-night existential conversations in Parisian cafés. I emphatically disagree. If 15 years in tech has taught me anything, it’s this: The more a company relies on tech, the more it needs people who are curious about the world around them. People who understand how others will feel and react to a set of circumstances.

Over the years I’ve come to the conclusion that we tech leaders all too often overvalue analytical, technical, and IQ-based skills rather than the social, EQ variety. We tend to ignore what history has taught us, look down on “soft skills” and subjects like philosophy, sociology, and literature because of their lack of a solution-oriented approach—in our eyes, at least—and yes, sometimes chase money over humanity’s advancement. We often accept the idea that damage to human lives caused by our innovations is a price to pay for progress, rather than think long and hard about how to steer clear of these negative effects in the first place. That unwavering faith in technology, that blindness toward human cost in the name of a “vision,” and that greed have led many of our companies to build technology that exploits our weaknesses and makes humanity subservient to tech—which is the opposite of what most of us intended.

No one expects innovators to be perfect. When you’re doing something new, mistakes, major ones even, are inevitable. But too often the leading tech companies have concealed their errors and stopped asking questions that were too difficult or uncomfortable to answer.

“You’re Arguing About Whether the Baby’s Dead”

Back in 2016, I witnessed the Valley’s ethical contortions first-hand. In a private conversation with two (male) social media executives, I described the harassment and bullying of women in my own—real life—social circle on Twitter, Facebook, and other platforms. These friends, I told them, were starting to avoid social media. I was met with a well-worn response: “Oh, that sucks, but . . .” followed by protestations of “. . . it’s not our job to filter content,” and “. . . we’re just a platform, not a publisher,” and “. . . hate speech, proportionately, is such a small part of our output,” and so on.

I countered that, even setting aside the argument for showing decisive moral leadership, going all-out to solve this problem makes clear business sense: If my friends are feeling this way, it’s highly likely that many other women (and men) around the world are too. And if tens of millions of users tire of the relentless toxicity, they will eventually quit these platforms. The answer, I argued, was and remains, at least in significant part, ramped up human moderation.

Their response? “Artificial intelligence will solve it for us.” Yes, I agreed, AI will be part of the solution. But let’s not kid ourselves here. It won’t be enough on its own—we’re nowhere near the point, even today—and in the meantime this issue will only fester.

Finally they conceded that human moderators would indeed play a role, as they already do—and they would be hiring more. Yet the numbers they had in mind at the time—certainly fewer than 10,000—would be wholly inadequate. I told them they were going to need many times that for a user-base of their size to tackle a crisis on this scale—in a variety of languages to boot.

“Yeah,” came the reply, “that’s probably not going to fly.”

Two years later, a strategic about-face had taken place. Facebook, for example, had plainly concluded that the tsunami of vitriol, violence, and fake news on its platform could no longer be kept in check by a relatively small number of moderators and machine learning alone. By the end of 2018, the company had expanded its “safety and security” team to around 30,000 people—half of whom are “content reviewers.” A mix of full-time employees, contractors, and third-party companies, between them they cover every time zone and about 50 languages.

Unsurprisingly, however, policing a deluge of extreme content that ranges from fake information to violent, hard-core pornography has taken a severe toll on moderators—who are nonetheless destined to play an important role in fixing this mess. Each shift, these individuals have to view many hundreds of videos and images, often plumbing the depths of human depravity.

Chris Gray, who worked for CPL, a Dublin-based outsourcing company that moderates content for Facebook, knows first-hand the toll that content moderation can take. When he worked for the contractor between 2017 and 2018, he began his training by reviewing images and video that users had reported as pornographic. It would prove to be a relatively gentle introduction. His role then shifted into moderating what he terms the “high-priority” content for the U.K. and Ireland, including bullying, threats of violence, hate speech, and graphic violence—”just all the nasty stuff, and all the worst you can imagine of the world,” Gray recalls. After that he was viewing Islamic State executions, beatings, murder, torture, a woman stoned to death, child exploitation, and the torture of animals, including video footage of dogs being cooked alive. He was later diagnosed with PTSD and is now suing Facebook in Ireland.

But the material he was viewing wasn’t the only problem. Deciding whether or not to take down a particular video or image was also stressful—particularly as moderators are constantly audited by supervisors. “Someone is double-checking what you’re doing, they’re taking a representative sample of your work and you have to reach this 98 percent quality score,” he explains. “And you never do at the first round, you always get these mistakes and you have to appeal your mistakes. After a while your big preoccupation is arguing the point that you’re right.”

He mentions the time he was viewing images of a massacre of Rohingya Muslims in Myanmar. “There was an image of a baby with somebody’s foot on its chest. I had decided that it was a dead baby because it wasn’t fighting back, and its eyes were closed. And my auditor had responded, ‘There’s nothing to indicate that the baby is dead, so you’ve made the wrong decision.’ Can you imagine that you’re arguing about whether the baby’s dead because you don’t want to get your quality score down? All you care about is getting the point back, not whether the baby is dead.”

“I’m Going to Show You More Car Crashes”

Technology’s ability to overpower and overwhelm humanity is no longer in question. And yet, according to Aza Raskin, co-founder of the Center for Humane Technology and former creative lead at Mozilla, as a society we have been caught off guard. Why? Because it turns out we focused on the wrong thing all along. “Technologists and culture have fixated on the point when technology takes control by overwhelming human strengths—the so-called ‘singularity’—but have entirely missed the point where technology takes control by exploiting human weakness. That’s the true singularity, and perhaps we are already there.”

Tech wasn’t the first industry to figure out that attention could be monetized and that “time on site” (literally the amount of time visitors spend on your website or app) was the metric that mattered most. Just look at the way casinos deploy their slot machines. “Casinos create special zoned spaces to play in, fine-tuned to keep you in a hypnotic state of mind, the ceiling height and lights are just so, the wins and the losses are both accompanied with similar hushed happy melodies and are calculatingly dosed at just the right rate to keep you there. It’s not a fair fight. The casinos have A/B tested their way to a powerful human-nature trap,” Raskin says. “And Silicon Valley companies have done the exact same thing. Do you recognize that hypnotic, auto-pilot state from using your phone? We all do. The soft animal underbelly of our minds is increasingly vulnerable to increasingly powerful technology.”

We first experienced this as “information overload,” when the human ability to process information, our natural curiosity, was surpassed, Raskin tells me. Next came “tech addiction,” where we’ve lost the ability to self-regulate because technology is overwhelming our vulnerabilities. The next phase was the polarization caused by what he describes as “the hacking of moral outrage.”

“It’s not that [the platforms] are showing people what they want, they’re showing people what they can’t help but look at,” he says. “If you drive past a car crash, you can’t help but look at the car crash because it’s surprising; and there’s a little AI now that’s watching you and [concluding] ‘Oh, I guess you like car crashes, that’s your true revealed preference. I’m going to show you more car crashes.’ And then you’re living in a world of car crashes.”
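Stripped of the metaphor, the “car crash” dynamic Raskin describes is a simple feedback loop: an engagement-maximizing recommender treats whatever holds your attention as a revealed preference and serves more of it. The sketch below is purely illustrative; the categories, scores, and update rule are invented for the example, not any platform’s actual system.

```python
import random

# Hypothetical illustration of an engagement-driven feedback loop.
# Content categories and their baseline "can't look away" scores are invented.
CATALOG = {"car_crash": 0.9, "cat_video": 0.5, "news_analysis": 0.3}

def watch_probability(category, learned_preference):
    # Chance the user lingers: shock value plus whatever the system
    # has already inferred about this particular user.
    return min(1.0, CATALOG[category] + learned_preference[category])

def recommend(learned_preference):
    # Rank purely by predicted watch time, the engagement metric.
    return max(CATALOG, key=lambda c: watch_probability(c, learned_preference))

learned_preference = {c: 0.0 for c in CATALOG}
for step in range(10):
    choice = recommend(learned_preference)
    if random.random() < watch_probability(choice, learned_preference):
        # Lingering is read as a "revealed preference," so the loop
        # reinforces itself and keeps serving more of the same.
        learned_preference[choice] += 0.05
    print(step, choice, round(learned_preference[choice], 2))
```

Even in this toy version, “car_crash” wins every round—not because the user asked for it, but because attention is the only signal the loop optimizes.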


The platforms have built a giant lever for changing beliefs, behaviors, and attitudes and hooked it up to their supercomputers’ “voodoo doll” model of every individual user, Raskin explains. “It starts as a little bit generic and then our metadata is collected—our hair, our toenail clippings, our click trails. . . . [Today] the models about us are getting so good that [they] can predict things that we think about.”
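The “voodoo doll” is, in essence, a predictive model built from behavioral traces. Here is a minimal sketch of the idea, assuming nothing more than an invented click trail; all topics and data are hypothetical.

```python
from collections import Counter, defaultdict

# A hypothetical click trail; the topics are invented for illustration.
click_trail = ["politics", "outrage", "politics", "outrage", "outrage",
               "sports", "outrage", "politics", "outrage"]

# First-order model: count which topic tends to follow which.
transitions = defaultdict(Counter)
for prev, nxt in zip(click_trail, click_trail[1:]):
    transitions[prev][nxt] += 1

def predict_next(current_topic):
    # The statistical stand-in for the user: guess their next move
    # from nothing more than their own past behavior.
    counts = transitions.get(current_topic)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("politics"))  # "outrage" in this toy data
print(predict_next("sports"))    # "outrage" here too
```

Real platform models are vastly richer, combining thousands of signals rather than one, but the principle Raskin points to is the same: past behavior becomes a prediction, and the prediction becomes the next thing you see.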

“Imagine playing chess against someone who could predict all of your moves before you made them,” Raskin says. “You would be irrevocably dominated. As technology is increasingly able to predict our behaviors and beliefs—and what will change them—will our agency and ‘free will’ also be irrevocably dominated?” 

Democracy Under Attack

During the 2016 U.S. election, a precision-targeted Russian disinformation campaign, designed to churn the electorate and largely to support the election of Donald Trump, is estimated to have reached some 126 million users on Facebook alone. Additionally, according to The New York Times, Russian agents published more than 131,000 messages on Twitter and more than 1,000 videos on YouTube. Amid the choreographed hand-wringing and anguish that followed, Facebook’s Mark Zuckerberg—who had in 2016 dismissed the notion that voters were manipulated in Trump’s favor on his platform as “a crazy idea”—solemnly vowed to fix this unprecedented electoral interference, one of the most complex problems modern Western democracy has ever faced.

In a 2018 blog post, Facebook’s CEO declared: “One of the important lessons I’ve learned is that when you build services that connect billions of people across countries and cultures, you’re going to see all of the good humanity is capable of, and you’re also going to see people try to abuse those services in every way possible. As we evolve, our adversaries are evolving too. We will all need to continue improving and working together to stay ahead and protect our democracy.” 

Trampled by Unicorns: Big Tech’s Empathy Problem and How to Fix It by Maëlle Gavet

Two years later, all the evidence shows that so far he has failed. Consider the European elections held in May 2019. In the run-up to that election, the online activist group Avaaz launched a Europe-wide investigation of how far-right networks were sowing deception on Facebook. After running a fundraising campaign among the group’s more than 60 million members globally, the organization set up a team of 30 people, including journalists, data analysts, and researchers, across Europe.

The research team was shocked by what it discovered: Well-organized far-right and anti-EU groups were weaponizing social media on an industrial scale to spread hateful content and disinformation, which garnered an estimated 762 million views in the three months before the election. Overall, Avaaz reported almost 700 “suspect pages and groups” to Facebook, which were followed by a total of about 35 million people and generated more than 76 million “interactions” (comments, likes, and shares) over a three-month period prior to the election. Facebook removed 132 of the pages and groups, accounting for almost 30 percent of all interactions across the networks in question.

The way these groups were able to manipulate Facebook’s safeguards, particularly in light of Zuckerberg’s 2018 assurances that the platform had “developed sophisticated systems that combine technology and people to prevent election interference on our services,” should ring alarm bells. In Italy, for example, Avaaz’s investigation prompted Facebook to take down 24 pages with more than 3.9 million followers. The offending pages, many of which were supportive of Matteo Salvini’s hardline League and the populist 5 Star Movement, peddled false information and “divisive anti-migrant content.” Avaaz reckoned that, in Italy alone, it had identified 14 main networks in apparent breach of Facebook’s guidelines, including 104 pages and eight groups with a total of 18.26 million followers—enough, perhaps, to have an impact on a closely fought campaign in a divided country. Similar cases were discovered in Germany, Spain, and Poland.

While Facebook apparently “welcomed” Avaaz’s efforts, it was also clear that the company was not doing nearly enough to combat a problem of this scale and with such far-reaching implications for democracy. “They were basically outsourcing to a crowd-funded organization like [Avaaz] to do the work they were supposed to be doing,” says Rebello Arduini, who led the operation for Avaaz. “It seemed they were relying on 30 people from Avaaz, and other small anti-disinformation organizations, to do what 30,000 people [i.e. Facebook’s mix of full-time employees and third-party moderators] didn’t.”

It gets even worse for Facebook. A 2019 report by the Oxford Internet Institute detailed an array of abuses by government agencies and political parties on Facebook. The report found that organized manipulation via social media platforms had more than doubled since 2017 (although this is partly due to researchers getting better at detecting it). A staggering 70 countries—including politicians and political parties from 45 democracies—have deployed so-called “computational propaganda,” which the authors describe as “including the use of ‘political bots’ to amplify hate speech or other forms of manipulated content, the illegal harvesting of data or micro-targeting, or deploying an army of ‘trolls’ to bully or harass political dissidents or journalists online,” all to covertly mold public opinion.

The report also found that in 26 authoritarian states, governments have used computational propaganda as a tool of information control, while a small number of sophisticated state actors use the same techniques for “foreign influence operations, primarily over Facebook and Twitter.” The platforms themselves attributed this “cyber troop activity” to just seven countries, mostly the usual suspects: China, India, Iran, Pakistan, Russia, Saudi Arabia, and Venezuela. “It’s not just a lone wolf hacker or a kind of collective of people who are coming together and trying to use the tools of computational propaganda for their own purposes,” explains Samantha Bradshaw, the report’s co-author. “There’s actually some kind of organized coordinated activity behind what you see on the content layer.”

While Bradshaw concedes that Facebook and other platforms have introduced a number of policy changes over the past three years, so far they are failing to meaningfully protect democracies. “Most of the changes have had to do with adjusting algorithms and affecting ways that information is prioritised, maybe downgrading stuff that has been fact-checked as false and promoting content that has been fact-checked as true by third-party fact-checkers.”

And yet, a whole presidential election cycle later, the platforms are still plagued by electoral interference. In March 2020, The New York Times revealed a number of examples of how the nature of the threat facing tech giants like Facebook, Google, and Twitter has evolved. “Russia and other foreign governments once conducted online influence operations in plain sight, buying Facebook ads in rubles and tweeting in broken English, but they are now using more sophisticated tactics such as bots that are nearly impossible to distinguish from hyperpartisan Americans,” the newspaper reported.

As the race for the White House intensifies and the stakes grow ever higher, Facebook in particular is rarely out of the headlines. Amid recent claims by a whistleblower, in a memo obtained by BuzzFeed News, that Facebook “ignored or was slow to act on evidence that fake accounts on its platform have been undermining elections and political affairs around the world,” the platform announced a number of proactive measures in the run-up to November 3rd. It will not accept any new political ads in the week prior to the election, and Nick Clegg, Facebook’s head of global affairs, told the Financial Times that the platform will “restrict the circulation of content” should the election descend into civil unrest. It will also finally launch its long-awaited oversight board, which will be the final arbiter over content moderation decisions.

While I have no doubt that this is too little and too late when it comes to the upcoming U.S. election, I still tend towards cautious optimism. If the long-running problems caused by Big Tech are to be addressed, we can’t rely only on government interventions—such as antitrust enforcement, labor law reform to fix the gig economy, and fair payment of taxes. The tech giants themselves must finally recognize that warm words and platitudes are no longer enough. Facebook’s belated actions over disinformation, and increasing pressure from public opinion, are a start. And they are a glimpse of what was possible all along, if only the will, ambition, and empathy had been there.

Excerpted with permission of the publisher, Wiley, from Trampled by Unicorns: Big Tech’s Empathy Problem, & How to Fix It by Maëlle Gavet. Copyright ©2021 by Maëlle Gavet. All rights reserved. 

