
Social media’s toxic impact can last up to 8 days. This behavioral scientist’s solution might surprise you

According to a new study, the United States might want to take some legislative cues from the European Union.

[Source photo: mikoto.raw Photographer/Pexels]

BY Clint Rainey | 3 minute read

Mark Zuckerberg argued back in 2019 (an eternity ago, in internet years) that policing online content was basically impossible under the current framework for social media platforms like Facebook. Nobody will ever rid the internet of bad stuff, but the unregulated nature of these networks—which are now used to spread information on an unprecedented global scale—has made moderating particularly difficult. Not all companies behind these platforms even believe this job is a priority, but the point stressed by Zuckerberg (and others since then) is that tech’s top minds continue to struggle—on their own, at least—to adequately curb harmful content’s spread.

However, some new research by two behavioral scientists suggests there is a “promising avenue” for reining in harmful content. Unfortunately for countries like the U.S., it does require a little less attachment to “free speech”—and a little more inspiration from overseas.

Marian-Andrei Rizoiu of the University of Technology Sydney and his coauthor, Philipp Schneider of the Swiss Federal Institute of Technology in Lausanne, have written a paper, published yesterday in the Proceedings of the National Academy of Sciences, that studied the potential effects of the European Union’s new Digital Services Act. The act is a groundbreaking set of reforms meant to reduce social media’s societal harms, though it is also part of a legislative package that U.S. Congress members have said “greatly concerns” them because it’s “unfair” to American tech companies. It started rolling out last year and enters full force next February, setting standards that companies like TikTok’s ByteDance and Meta must comply with inside the EU.

One such mandate is for platforms to install human “flaggers,” whose job is to identify and remove harmful content within 24 hours. A single day may not sound like much time for a post to stay online, but when the content is disinformation, hate speech, or straight-up terrorist propaganda, it turns out to be plenty of time to do damage.

“We’ve seen examples on Twitter where the sharing of a fake image of an explosion near the Pentagon caused the U.S. share market to dip in a matter of minutes,” Rizoiu, the lead author, explained in a press release.

But social media’s rapid dissemination has raised concerns that the Digital Services Act’s moderation policies might prove ineffective, given how fast posts can go viral. “There were doubts about whether the new EU regulations would have an impact,” said Rizoiu.

To probe those fears, he and Schneider created a mathematical model to analyze how harmful content spreads across social networks. It used two measures to assess the effectiveness of the Digital Services Act’s moderation: the content’s potential for harm (how widely it could spread) and its half-life (how long it takes for half of the content’s eventual spread to occur).

The researchers write that X, formerly Twitter, is said to have the shortest half-life, pegged by previous research at under 30 minutes. Facebook comes next at a little more than 100 minutes, followed by Instagram (20 hours), LinkedIn (one day), and finally YouTube (just over a week, at 8 days). They say their findings support the argument that harm reduction is possible even on platforms like X, where information moves lightning-fast: their model suggests that moderating a post 24 hours after it goes live can still reduce its reach by as much as 50%.
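To get a rough sense of what those half-lives imply, here is a back-of-the-envelope sketch (not the authors’ model) that assumes reshare activity simply decays exponentially at each platform’s reported half-life; under that assumption, the fraction of a post’s activity still to come after a moderation delay of t hours is 2^(−t / half-life):

```python
# Toy calculation: if reshare activity decays exponentially with a given
# half-life, the fraction of a post's total activity that has NOT yet
# happened after `delay` hours is 2 ** (-delay / half_life).
# The half-lives are the figures cited in the article; the decay-only
# assumption is a simplification, not the paper's model.

HALF_LIVES_HOURS = {
    "X (Twitter)": 0.5,      # under 30 minutes
    "Facebook": 100 / 60,    # a little more than 100 minutes
    "Instagram": 20,
    "LinkedIn": 24,
    "YouTube": 8 * 24,       # about 8 days
}

MODERATION_DELAY_HOURS = 24

for platform, half_life in HALF_LIVES_HOURS.items():
    remaining = 2 ** (-MODERATION_DELAY_HOURS / half_life)
    print(f"{platform:12s}: {remaining:.1%} of activity would still lie ahead "
          f"after a {MODERATION_DELAY_HOURS}h delay")
```

On this naive decay-only view, a 24-hour delay would look nearly useless on a fast platform like X, which is where the self-exciting dynamics described next appear to come in: removing a post also removes the future posts it would have triggered.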

They explain that a single harmful post does more damage than its own reach would suggest, because it ignites a “self-exciting point process”: a situation in which each event increases the probability that further events will follow. As the study authors write, “It draws more people into the discussion over time and generates further harmful posts,” similar to the age-old game of Telephone, except the message grows more harmful over time rather than just less consistent. If left unchecked, the authors write, “[a post’s] ‘offspring’ can generate more offspring, leading to exponential growth.”
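A “self-exciting point process” is commonly modeled in the research literature as a Hawkes-style branching process, in which every post spawns a random number of “offspring” posts after some delay. The sketch below is a generic simulation of that idea, not code or parameters from the paper: the branching ratio, the average delay, and the assumption that moderation takes down the whole cascade at once are all illustrative. It shows how pruning a cascade at 24 hours also eliminates the offspring-of-offspring that would have arrived later.

```python
import numpy as np

rng = np.random.default_rng(0)

BRANCHING_RATIO = 0.8   # assumed: average offspring per post (< 1 keeps cascades finite)
MEAN_DELAY_H = 6.0      # assumed: average hours between a post and each of its offspring

def simulate_cascade(horizon_h=30 * 24, moderate_at_h=None):
    """Branching ('offspring') view of a self-exciting point process:
    every post spawns a Poisson number of offspring after exponential delays.
    If moderate_at_h is set, any post after that time never happens, and
    neither do its descendants (a simplification: the whole cascade is
    treated as taken down at that moment)."""
    events = []
    frontier = [0.0]                      # the seed post at t = 0
    while frontier:
        t = frontier.pop()
        if t > horizon_h:
            continue
        if moderate_at_h is not None and t > moderate_at_h:
            continue                      # pruned branch: its offspring never appear
        events.append(t)
        for _ in range(rng.poisson(BRANCHING_RATIO)):
            frontier.append(t + rng.exponential(MEAN_DELAY_H))
    return events

unmoderated = [len(simulate_cascade()) for _ in range(2000)]
moderated = [len(simulate_cascade(moderate_at_h=24)) for _ in range(2000)]

print("average posts per cascade, unmoderated:   ", np.mean(unmoderated))
print("average posts per cascade, moderated @24h:", np.mean(moderated))
```

The printout compares average cascade sizes with and without the 24-hour cutoff; the gap between the two is the spread that pruning the cascade prevents.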

They also note that in their model, the more harmful the content, the greater the effect of moderation, which works by breaking the “word-of-mouth cycle” in which the initial post’s content would otherwise spread exponentially.

Meanwhile, other countries, the U.S. among them, are mulling their own regulatory responses, and the EU’s head start already seems to be prompting more creative solutions from the social media giants. This month, TikTok announced measures to make it easier for users, in Europe only, to report illegal content. “We will continue to not only meet our regulatory obligations,” it said in a statement, “but also strive to set new standards through innovative solutions.”

ABOUT THE AUTHOR

Clint Rainey is a Fast Company contributor based in New York who reports on business, often food brands. He has covered the anti-ESG movement, rumors of a Big Meat psyop against plant-based proteins, Chick-fil-A's quest to walk the narrow path to growth, as well as Starbucks's pivot from a progressive brand into one that's far more Chinese.

