
Fixing Section 230, not ending it, would be better for everyone

Sensible tweaks to the law that protects social media platforms from liability for content could make critics happy. It might even help the platforms.

[Photos: MichaelJay/iStock; block37/iStock; Ian Tuck/Unsplash]

A section of a 1996 law, the Communications Decency Act, has received a lot of attention lately, mainly because President Trump threatened to veto the $741 billion defense bill unless the provision was repealed. On December 23, he followed through, vetoing the bill and asserting that Section 230 "facilitates the spread of foreign disinformation online, which is a serious threat to our national security." Only days later, however, Congress voted overwhelmingly to override Trump's veto, the first time that has happened during his term.


One of the unusual things about Section 230 (as it is commonly called, without reference to the larger legislation that contains it) is that it has been attacked for years by leaders from across the political spectrum, for different reasons. In fact, President-elect Joe Biden said earlier this year that "Section 230 should be revoked, immediately." The protection it provides to tech companies against liability for the content posted to their platforms has been portrayed as unfair, especially when companies apply content moderation policies in ways that their opponents view as inconsistent, biased, or self-serving.

So is it possible to abolish Section 230? Would that be a good idea? Doing so would certainly have immediate consequences, since from a purely technical standpoint, it's not really feasible for social media platforms to operate in their present manner without some form of Section 230 protection. Platforms cannot do a perfect job of policing user-generated content because of the sheer volume of content to analyze: YouTube alone gets more than 500 hours of new video uploaded every minute.


The major platforms use a combination of automated tools and human teams to analyze uploads and posts, flagging and remediating millions of pieces of problematic content every day. But these systems and processes cannot simply scale up linearly. Copyright violations, for example, are detected and taken down at an extremely large scale, yet it's also easy to find pirated full-length movies that have stayed up on platforms for months or years.


There is a huge difference between these systems being pretty good and being perfect, or even just good enough for platforms to take broad legal responsibility for all content. It's not a question of tuning algorithms and adding people; tech companies need different technology and approaches.

But there are ways to improve Section 230 that could make many parties happier.

One possibility is to replace the current version of Section 230 with a more clearly defined best-efforts requirement: platforms would have to use the best available technology and would be held to some kind of industry standard for detecting and remediating violating content, fraud, and abuse. That would be analogous to standards already in place in the area of advertising fraud.


Only a few platforms currently use the best available technology to police their content, for a variety of reasons. But even holding platforms accountable to common minimum standards would advance industry practices. Section 230 already contains language on the obligation to restrict obscene content that requires companies only to act "in good faith"; that language could be strengthened along these lines.

Another option could be to limit where Section 230 protections apply. For example, they might be restricted to content that is unmonetized. In that scenario, platforms would display ads only next to content that had been analyzed thoroughly enough for them to take legal responsibility for it. The idea that social media platforms profit from content that should not be allowed in the first place is one of the things most parties find objectionable, and this would address that concern to some extent. It would be similar in spirit to the greater scrutiny already applied to advertiser-submitted content on each of these networks. (In general, ads are not displayed unless they pass content review processes that have been carefully tuned to block any ads violating the network's policies.)

Beware of pitfalls

Of course, there are unintended side effects that would come from changing Section 230 so that content is policed more rigorously and automatically, especially through the use of artificial intelligence. One is that there would be many more false positives: users could find completely unobjectionable posts automatically blocked, perhaps with little recourse. Another potential pitfall is that imposing restrictions and greater costs on US social media platforms could make them less competitive in the short term, since international social networks would not be subject to the same constraints.
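The false-positive tradeoff above can be illustrated with a toy sketch (not any platform's real system): imagine a classifier assigns each post a "violation" score, and the platform blocks posts above a threshold. Lowering the threshold to catch more real violations inevitably blocks more benign posts as well. All scores and data here are hypothetical.

```python
# Toy moderation filter: block a post when a hypothetical
# classifier's violation score crosses a threshold.
posts = [
    # (violation_score, actually_violating)
    (0.95, True), (0.80, True), (0.65, False),
    (0.55, True), (0.40, False), (0.10, False),
]

def moderate(posts, threshold):
    """Return (posts blocked, false positives, missed violations)."""
    blocked = [(s, v) for s, v in posts if s >= threshold]
    false_positives = sum(1 for s, v in blocked if not v)
    missed = sum(1 for s, v in posts if v and s < threshold)
    return len(blocked), false_positives, missed

for threshold in (0.9, 0.6, 0.3):
    blocked, fp, missed = moderate(posts, threshold)
    print(f"threshold={threshold}: blocked={blocked}, "
          f"false_positives={fp}, missed_violations={missed}")
```

In this tiny example, a strict threshold of 0.9 misses two real violations, while a permissive threshold of 0.3 catches them all but wrongly blocks two harmless posts; stricter enforcement mandates would push platforms toward the latter end of the tradeoff.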


In the long run, however, if changes to Section 230 are thoughtful, they could actually help the companies that are being policed. In the late 1990s, search engines such as AltaVista were polluted by spam that manipulated their results. When an upstart called Google offered higher quality results, it became the dominant search engine. Greater accountability can lead to greater trust, and greater trust will lead to continued adoption and use of the big platforms.


Shuman Ghosemajumder is Global Head of Artificial Intelligence at F5. He was previously CTO of Shape Security and Global Head of Product for Trust and Safety at Google.