Over the years, social media platforms have gone on the record countless times about plans to “finally” curb harassment, hate speech, and misinformation. Last month was no different: YouTube announced a ban on all anti-vaccine content, Facebook said it had created a new policy against “coordinated social harm,” and Twitter unveiled new tools that it said would lead to better filtering and limiting replies.
There is reason to be optimistic. Technology has developed to the point where it can preemptively and fairly effectively weed out the majority of unseemly content. Rather than relying solely on users to report misconduct, as many platforms still do, these new tools use artificial intelligence and machine learning to nip harassment in the bud. The aim is an automated system that works more like a referee: one that can call out harassment the moment it happens, mid-game.
But what’s puzzling is that even amid all of these technological advances, we haven’t really seen much progress when it comes to actually curbing hate speech on social media. Instead, the same cycle has repeated, time and again. The public is now beginning to understand that the underlying problems that lead to hate speech are extremely complex.
On a recent episode of 60 Minutes, a whistleblower went on record to say that Facebook has in fact been misleading the public on the progress it’s making against hate speech. The whistleblower, Frances Haugen, an ex-Facebook employee, said that she witnessed firsthand how in curbing hate speech, the company had to decide between its own financial gain and the public good. Haugen said that “Facebook, over and over again, chose to optimize for its own interests, like making more money.”
As a consultant who has worked in this field for more than a decade, in both social media and in the gaming industry, I have arrived at a similar conclusion: that the problems holding back effective online hate speech moderation are systemic.
To truly curb online harassment, we need to have a serious conversation about the fundamentals of these online communities, their incentives, and their relationships to their users.
In my view, social media platforms should look to the gaming industry for answers. While social platforms have been struggling with a new wave of hate speech, gaming companies have in recent years been much more aggressive and hands-on about actual moderation.
Why is the gaming industry making speedier progress than social media platforms on this front? Gaming companies have done a better job of answering the following three questions when building truly effective anti-hate strategies online.
1. What’s the purpose of this platform?
When social media platforms try to curb harassment, they often run into angry cries from users and the media, criticizing them for restricting freedom of speech. This is because social media companies have framed themselves as platforms for broadcasting information and free speech.
In the gaming world, by contrast, restricting "freedom of speech" isn't that big of an issue. Moderators can block messages at will and face much less resistance for it. No one logs on to Call of Duty solely to spread misinformation about vaccines; the game is the main draw, not the promise of a captive audience. That means gaming companies are free to take aggressive measures whenever needed, so long as the game remains enjoyable.
Online communities need to think carefully about what they’re promising their users and whether they can truly deliver on that experience. Users will hold them accountable for it in the end.
2. What’s the revenue model?
In advertising-based revenue models, which most social media platforms rely on, the fear of bad PR carries a lot of weight. Curbing hate speech too aggressively might spark a backlash over freedom of speech, which in turn alienates advertisers. But not curbing hate speech strongly enough can create a toxic community that gets a bad rap, driving away advertisers who feel the platform isn't doing enough.
When online communities are hung up on what advertisers want or need, they’re less likely to take bold action and try aggressive tactics to eliminate hate speech. This juggling of the needs of many different parties—the advertisers, the users, the company itself—means that no side ends up happy and the business ultimately suffers.
The companies behind games, which are often more concerned with user acquisition and retention and in many cases derive their entire revenue from users rather than advertisers, show a lot more willingness to serve the needs of the community at large. They're not afraid to sacrifice the small number of users who don't like the changes, or to take action that might upset their advertisers.
The truth is, online communities are only as engaging as the people who use them. The user-first strategies we're seeing in the gaming world make the actual product and service better, generate more trust, and ultimately deliver tangible results in revenue and profit.
Online communities need to ask themselves if their chosen revenue model is sustainable, especially when it comes to hate speech and harassment. If not addressed properly, these problems will only become more prominent as the user base grows.
3. Who’s the competition?
Entire industries can become paralyzed when no single company takes the lead. This is particularly true of social media, where the market is dominated by a few players. The result is stagnation, and users don't know to ask for better.
Sometimes this opens the door for new players to enter the market and make big waves, as was the case with the dating app Bumble. The startup reimagined a more automated but still user-driven way of curbing hate speech and ended up becoming a $14 billion competitor to incumbents.
The gaming world is distinct in that thousands of new products pop up every day. That competition leaves little room for complacency, even among established players. There is constant pressure to do better by users, especially when they're the main driver of revenue.
Online communities need to reflect on whether they’ve built the foundations of their platform in a way where, in the long run, it is actually possible to have a community free of hate. What I’m seeing now is that social media still has a long way to go—and those platforms should be looking to the gaming industry for answers on where to go next.
Sarita Runeberg is head of gaming at global tech agency Reaktor.