
In a new report on disinformation from the Aspen Institute, experts propose revising Section 230 and forcing social media companies to submit to data audits.

Katie Couric, Rashad Robinson, and Chris Krebs say it’s time to pull immunity for social media platforms


BY Ruth Reader · 4 minute read

Misinformation surged during the pandemic, sowing distrust in COVID-19 vaccines and helping incite the riot at the Capitol. Now a coalition of experts on misinformation and disinformation is making a specific set of recommendations to lawmakers on how to fix the problem, and Big Tech might not be so happy.

Most notably, the proposal calls for changes to Section 230, the controversial part of the 1996 Communications Decency Act that protects online platforms from being sued over user-generated content. The Aspen Institute, a research center and think tank, brought together a who's who commission of experts on disinformation to illuminate the problem and offer strategic steps to address it. The commission's chairs are journalist Katie Couric; civil rights leader and Color of Change president Rashad Robinson; and Chris Krebs, the former director of the Department of Homeland Security's Cybersecurity and Infrastructure Security Agency.

Spread via the internet, disinformation and its close cousin misinformation have contributed to a series of public harms over the past decade, including interference in the 2016 U.S. elections, disruption of pandemic-related public health efforts, fomentation of genocide in Myanmar, and the January 6th siege of the Capitol. Disinformation, intentionally misleading information, is engineered to go viral, taking advantage of social media algorithms that favor outrageous perspectives. Misinformation, false information with no clear intent to deceive, similarly keeps slipping past social media companies' efforts to curtail it.

Last month, Facebook whistleblower Frances Haugen helped explain why those mitigation strategies fail. The former Facebook product manager, who served on the company's civic integrity team, called out the social network for misleading the public about how much it actually does to protect users from harmful content. "The thing I saw at Facebook, over and over again, was there were conflicts of interest between what was good for the public and what was good for Facebook," Haugen told 60 Minutes. "And Facebook, over and over again, chose to optimize for its own interests, like making more money." For years, Facebook has ducked responsibility for content on its platform, assuring regulators and the public that it is doing its best to balance free speech with reining in misinformation and speech that incites violence and hate. Haugen's account suggests that is not the full picture.

Social media companies have not yet been held to account, shielded by Section 230. Legislators have threatened to change the law (rhetoric that reached a fever pitch after the Capitol riots), but so far haven't touched it. The Aspen Institute's Commission on Information Disorder Final Report suggests removing this immunity for content that advertisers have paid to promote, as well as any content that has gone viral because of a platform's recommendation algorithms. The commissioners also note that while free speech is a constitutional right, private platforms are not the public square, and companies have the right to restrict speech.

The commission's recommendations are thorough, going much further than simply suggesting amendments to Section 230. The report faults the federal government for failing to understand the issues and create meaningful rules that protect the public. ("Congress…remains woefully under-informed about the titanic changes transforming modern life," the authors write.) The commission also notes that despite Big Tech's pleas to be regulated, industry leaders have "outsized influence in shaping legislative priorities favorable to its interests." To guide future legislative efforts, the commission suggests the government force social media platforms to be more transparent through data audits.

One of the biggest hurdles to understanding both the effects of disinformation and the magnitude of the problem has been a lack of cooperation from the platforms themselves. Researchers often struggle to get the depth of information they need. (Facebook has been known to outright ban researchers who attempt to get this information without the company’s explicit participation.)

The report says social platforms should be required to disclose “categories of private data to qualified academic researchers, so long as that research respects user privacy, does not endanger platform integrity, and remains in the public interest.” It also says that there should be federal protections in place for researchers and journalists who investigate social platforms in the public’s interest (even if they violate the platform’s terms and conditions in the process). It suggests that Congress require social media companies to publish transparency reports that include content, source accounts, reach, and impression data for posts that reach large audiences, and offer regular disclosures on key data points about digital ads and paid posts that run on their platforms. And it calls for clear content moderation practices as well as an archive of moderated content that researchers can access.

In addition to these transparency measures, the commission asks the federal government to establish a strategic approach to countering misinformation and disinformation, and to create an independent organization devoted to developing well-informed countermeasures. This could include efforts to educate the public about misinformation and how to discern between fact and propaganda online.

Finally, the report calls for investment in local newsrooms and in diversity measures, both in newsrooms and at social media companies. To support newsrooms, the report points to the creation of a digital advertising tax, much like the one Maryland passed, and says some of those proceeds should go toward struggling local newsrooms to bolster reputable reporting. The report also suggests incentivizing donations to local news operations through tax credits.

The report also recommends that platforms hire diverse workforces to ensure that a broad spectrum of experiences is considered when companies design rules and content mitigation strategies. Rashad Robinson, president of Color of Change and one of the commission's co-chairs, says that the government could play a role here. "Diversity should be part of how the government evaluates these companies," especially their efforts to protect users, he says.

Robinson has worked for years on civil rights issues related to the web and has spent a fair amount of time talking to regulators. “These are recommendations that I fundamentally believe are actionable,” he says.



ABOUT THE AUTHOR

Ruth Reader is a writer for Fast Company. She covers the intersection of health and technology. More
