
Two ways social networks could control toxic content

Ron Porat, CTO of L1ght, discusses why platforms are failing to keep their sites free of hate speech, harassment, and other inappropriate content and how to fix it.


From headlines about YouTube allowing racist and homophobic harassment in its comments sections, to revelations that GIPHY is playing host to child pornography and white supremacist imagery, to moderators accusing Facebook of giving them PTSD, it’s clear that platforms across the board are failing to keep their sites free of toxic content or to approach moderation properly.


It’s not for lack of effort. Social media companies invest in programs, many driven by AI, to help keep experiences safe and positive for their users. Many employees are parents themselves or identify with groups that experience online bullying. They certainly didn’t set out to create a breeding ground for harmful content. So why are these efforts falling short?

The short answer is: We’re not leading with empathy.

Too many lines of code, too many scaling and availability issues, too many upgrades to firmware. We’ve gotten so engulfed in the details that we’ve forgotten the basic human trait of empathy. Behind those lines of code, long architecture meetings, and hardware are people, many of them children, who use these services every day. They care far more about having a safe space than about optimized server use (though the two aren’t mutually exclusive).

The advent of AI offered these companies hope of moderating their platforms at scale, something that would otherwise require thousands of full-time employees sifting through groups, pages, and messages by hand. This technology allows us to rapidly learn, improve, and in some cases even predict, but it’s only as good as the data it’s fed and the developers setting the parameters for its algorithms.

Online predators don’t think like mathematicians or data scientists, so of course programs that look at people through the narrow, tunnel-like vision of a coder or a mathematician do a poor job of spotting the nuances in human interactions that signal something nefarious is happening. These narrow parameters miss the fact that people exploit weaknesses in others in order to hurt them (shaming, bullying, hate, trolling, and the like). It’s an unfortunate part of human nature as it currently stands.

Every big social media company has programs in place to try to address this issue, but none detects the nuance of language effectively. Facebook, for example, employs human content moderators alongside its AI to try to better understand context, but it has no training in place to help them prepare for the toxic and intense content they will be viewing. Recently, a group of these moderators sued Facebook for emotional trauma and workplace-caused PTSD as a result. But human-based moderation, even if it didn’t have a lasting negative psychological impact, simply doesn’t cut it anymore. Toxicity has become too advanced and widespread for people alone to track it all.


Google’s counter-abuse technology team has Perspective, which is a step in the right direction, but it is easily confused by the context of language. Microsoft has PhotoDNA, but it is limited to matching already known illicit content, so it can’t catch new images until they have spread widely enough to be added to its database. We’re long overdue for a more effective means of moderation, one that at least keeps pace with toxic content, if not gets ahead of it.
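
To make that limitation concrete, here is a minimal sketch of hash-based matching. The hash set and helper function are hypothetical stand-ins for illustration only; PhotoDNA itself uses a proprietary perceptual hash rather than an exact SHA-256 digest, but the structural blind spot is the same.

# A toy illustration of why matching against known content can't catch new material.
import hashlib

KNOWN_ILLICIT_HASHES = set()  # populated only after content has been reported

def is_known_illicit(image_bytes: bytes) -> bool:
    """Flag an image only if this exact file was previously hashed and reported."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_ILLICIT_HASHES

# A brand-new image produces a digest nobody has seen before, so it passes the
# check until it spreads widely enough to be reported and added to the database.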

This has become a personal mission for me. I have worked in technology for decades, but things changed when I discovered that my son was being targeted by a predator while playing a game online. I wondered how this could have happened and why more wasn’t being done to prevent it. Thankfully, nothing bad happened, because he was aware enough to bring it to my attention, but the experience made me want to use my expertise to create real change.

That’s when I started L1ght with my friend (and investor on the Israeli version of ‘Shark Tank’) Zohar Levkovitz.

We approached the problem from a 360-degree perspective, incorporating aspects of anthropology, sociology, psychology, and philosophy to train the algorithm to think like both children and predators. For a computer to analyze the human psyche, it must have knowledge from the world of behavioral science, not just numbers and statistics, and of course, behavioral and linguistic context.

This doesn’t mean Facebook should hire thousands of psychologists to review its data, but rather that the mathematical classifiers built by machine learning experts should be defined, researched, and mentored by people who study the human psyche. Toxicity, in all of its flavors, can only be identified as part of continued communication, not from a single sentence that might be completely out of context. That context piece is critical to an algorithm’s success and is what allows it to scale: thousands of false positives that require human review and pull resources can be as frustrating as hundreds of missed instances of bullying or exploitation.

Here’s an example:


Person A: “Omg, I’m gonna kill myself. Can’t do this anymore.”

I don’t have to tell you how a simple algorithm would categorize this statement: automatic flag for review. But with context, the outcome is completely different:

Person A: “Omg, I’m gonna kill myself. Can’t do this anymore.”
Person B: “What happened? Why the drama?”
Person A: “I have an exam tomorrow, and I’m still procrastinating. I’m going to die.”
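
Here is a toy sketch of that difference in Python. The keyword lists and both functions are hypothetical stand-ins for a real classifier, not L1ght’s actual system; the point is only the shape of the problem, scoring whole conversations rather than single sentences.

from typing import List

CRISIS_PHRASES = ("kill myself", "can't do this anymore", "going to die")
EVERYDAY_CONTEXT = ("exam", "homework", "procrastinating", "traffic")

def flag_message(text: str) -> bool:
    """Per-message rule: alert on any crisis phrase, regardless of surroundings."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in CRISIS_PHRASES)

def flag_conversation(messages: List[str]) -> bool:
    """Conversation-level rule: alert only when crisis language lacks benign context."""
    joined = " ".join(messages).lower()
    crisis = any(phrase in joined for phrase in CRISIS_PHRASES)
    benign = any(word in joined for word in EVERYDAY_CONTEXT)
    return crisis and not benign

chat = [
    "Omg, I'm gonna kill myself. Can't do this anymore.",
    "What happened? Why the drama?",
    "I have an exam tomorrow, and I'm still procrastinating. I'm going to die.",
]

print(flag_message(chat[0]))    # True: the first message alone triggers a flag
print(flag_conversation(chat))  # False: the exam context defuses the alert

A production system would replace these keyword lists with learned models, but the structural point stands: the unit of analysis has to be the exchange, not the sentence.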

Human nuance in language is one of the biggest remaining challenges in AI adoption. Context is crucial if we are to make the right decisions, and two critical things need to happen if we truly want to solve this problem.

For one, tech companies’ DNA needs to change radically. It should include people from all fields of human knowledge, with a plurality of gender, thought, age, and the like, not just coders and DevOps specialists who tend to have similar backgrounds and ways of thinking.

Another is that social networks must understand they have an obligation to their users to create a safe space, free from illegal and harmful content. This includes predicting toxic behavior and stopping it in real time, before something terrible happens. It’s no longer enough to simply remove toxic content once it’s found. There must be systems in place that truly understand language and context.


Only when these two prerequisites, empathy and context, are met will machine learning experts be able to successfully do their job and create toxic-behavior classifiers that are accurate and effective at stopping online toxicity.

Human interactions are nuanced; our algorithms need to be, too.


Ron Porat is the cofounder and chief technology officer of L1ght.
