The Fast Company Executive Board is a private, fee-based network of influential leaders, experts, executives, and entrepreneurs who share their insights with our audience.

What’s the ROI on safety?   

Technology oversight alone will not keep people safe online. Tech companies must develop a teaching and coaching mentality for users as well.


As the creator economy continues to reach new heights, the challenge of ensuring online safety is expanding at an equally rapid pace. New products, features, and capabilities—enabled by technology—necessitate new, better, and faster safety solutions.


The safety solutions that have worked in the past—those capable of handling more traditional, asynchronous communication methods like email, comments, and social media posts—don’t suffice for livestreaming video, nor will they support the interconnected world of the future. We need a far more sophisticated approach, one that combines the power of humans and AI.

Livestreaming really is live: this entertaining, fun, and creator-welcoming medium happens in real time. Unlike live television broadcasting, which builds in a two- to three-second delay to handle mishaps, insert bleeps, or cut the feed, livestreaming offers no opportunity for takebacks.

That is why the Oasis Consortium has developed user safety standards. As a founding member of this think tank, we are working with fellow trust and safety experts from metaverse builders, industry organizations, academia, nonprofits, government agencies, and advertisers to “accelerate the development of a better, more sustainable internet.”


I encourage my fellow CEOs and technology founders to investigate their own industry bodies so we can build expertise across industry silos.

AI + HUMAN OVERSIGHT

Combining artificial intelligence (AI) with human intervention is the best solution available today, as AI doesn’t yet grasp nuance the way human beings do.

When a livestream starts, AI is vital for monitoring on-screen visuals and direct messages, scrolling chat box data, and analyzing the live voice track, as well as capturing, reviewing, and storing the transcript in real time. However, when moderating content, context is key, and AI isn’t always attuned to it. Human oversight is therefore crucial for making real-time decisions about what is—or isn’t—toxic or harmful. Safety for our creators and their livestreaming audiences means making sure they’re protected from harassment, hate speech, scams, or worse.
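To make that division of labor concrete, here is a minimal sketch of the human-in-the-loop pattern described above. The classifier, thresholds, and review queue are illustrative assumptions for this article, not any company’s actual moderation system.

```python
# Minimal human-in-the-loop moderation sketch. The scorer, thresholds,
# and queue below are illustrative assumptions, not a real product API.
from dataclasses import dataclass
from queue import Queue

@dataclass
class ChatMessage:
    user_id: str
    text: str

# Hypothetical queue watched by human moderators.
human_review_queue: Queue = Queue()

def toxicity_score(text: str) -> float:
    """Placeholder for a real classifier (e.g., a trained toxicity model)."""
    flagged_terms = {"scam", "hate"}          # toy heuristic for the sketch
    if any(term in text.lower() for term in flagged_terms):
        return 0.95                           # clear-cut abuse
    if text.isupper() and len(text) > 10:
        return 0.6                            # shouting: ambiguous, context matters
    return 0.1                                # likely benign

def moderate(message: ChatMessage) -> str:
    score = toxicity_score(message.text)
    if score >= 0.9:
        return "blocked"                      # AI acts instantly on obvious abuse
    if score >= 0.5:
        human_review_queue.put(message)       # ambiguous: a human judges context
        return "held_for_review"
    return "allowed"                          # no friction for normal users

print(moderate(ChatMessage("u1", "this is a scam, send your password")))  # blocked
print(moderate(ChatMessage("u2", "WHY WOULD YOU EVER DO THAT")))          # held_for_review
```

The design choice is the point: the machine handles volume and speed, while anything context-dependent is routed to a person rather than decided by the model alone.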


Algorithmic detection of abuse leverages predictive models that change constantly, usually based on what are called behavior profiles. Data science methods categorize patterns of what constitutes “normal” (read: safe) behavior online; the system then flags anything that deviates from this norm as potentially malicious.
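As an illustration only (the technique named here is one simple option among many), deviation from a behavior profile can be measured as a z-score against a user’s historical baseline. The feature choice and threshold are assumptions for the sketch.

```python
# Illustrative anomaly detection over a behavior profile: flag activity that
# deviates sharply from a user's historical baseline. The feature (messages
# sent per minute) and the threshold are assumptions for this sketch.
from statistics import mean, stdev

def is_anomalous(history: list[float], current: float, z_threshold: float = 3.0) -> bool:
    """Flag `current` if it sits more than `z_threshold` standard
    deviations from the user's historical norm."""
    if len(history) < 2:
        return False                  # not enough data to define "normal" yet
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu          # any change from a perfectly flat baseline
    return abs(current - mu) / sigma > z_threshold

# Example: a user who usually sends ~2 messages per minute suddenly sends 40.
print(is_anomalous([2.0, 1.5, 3.0, 2.5, 2.0], 40.0))  # True -> route to moderation
```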

As the industry shares information across sectors, we will be able to pinpoint and prevent damage from trolls and bots. When a troll escalates hate speech, harassment, or other antisocial behavior, the AI will flag it instantly and take action in partnership with human moderators. As an industry, we need to get so good at this that the on-screen experience stays immediate and seamless because we’re that quick in dealing with safety issues.

CONSUMER POWER

Technology oversight alone will not keep people safe online. Tech companies must develop a teaching and coaching mentality for users as well. Users are not passive consumers; they are active participants with a role to play. It is up to us to give them the tools to exercise personal agency, find resources, and take action against both overt harassment and covert problematic behavior.


The general advice regarding online harassment may once have been “Oh, just ignore them.” But that’s no longer possible as we live increasingly digital lives. As the father of three young children, I wish we could keep everyone safe through tech-based interventions. But we can’t. We need people to be equipped with the tools of engagement, empowered to protect themselves by quickly clicking “block” or “mute.” Sadly, these tools don’t always solve the problem. Sometimes victims of online hate will choose to go further, but they need support to do so effectively.

After experiencing online hate firsthand, some individuals decide to turn the tables, creating personal advocacy organizations and crisis hotlines. The best known include the Games and Online Harassment Hotline, founded by games critic Anita Sarkeesian and game designer Christopher Vu Gandin Le, and Crash Override, created by award-winning game developers Zoë Quinn and Alex Lifschitz.

In the past 16 years, grassroots network HeartMob has trained more than 50,000 people in harassment intervention and published more than 15,000 stories of personal experience to help others learn what to do when it happens to them. On a more formal basis, PEN International, the global advocacy network for writers since 1921, has developed guidance for practicing “counterspeech” to both expose and engage persistent trolls.


I’m often asked, “What’s the ROI on safety for us as technology leaders?” The question is a strange one. Safety is like oxygen, like water: we don’t ask about its ROI. A social community cannot thrive without strong moderation practices, and without trust and safety in our livestreaming spaces, we can’t build sustainable businesses. The work of keeping our communities safe will never be over.


Geoff Cook is a serial entrepreneur, CEO of The Meet Group, and co-CEO of ParshipMeet Group.
