How Facebook’s AI Is Helping Save Suicidal People’s Lives

The company had been testing AI technology on text posts in the U.S. Now the technology is global and also ready for live video.

Earlier this fall, a woman in Alabama brandishing a knife went live on Facebook and made it clear that she was suicidal. But before it was too late, the social-networking platform contacted the authorities, who were able to find her and get her to the hospital.

The company has long had a large community operations team that, among other things, is responsible for contacting law enforcement or other first responders when it receives credible user reports that someone is in danger of taking their own life. But in this case, it was Facebook’s artificial intelligence systems that sounded the alarm.

In the now-famous manifesto laying out his vision for how Facebook plans to build a better global community, CEO Mark Zuckerberg noted the social network’s unique power to help people in danger of self-harm, given the size and breadth of its user base and the myriad ways those 2 billion-plus people communicate across its systems. In March, the company announced a pilot project in the U.S. that would leverage AI to proactively identify Facebook posts signaling a user’s suicidal intent and expedite bringing those posts to the community operations team’s attention.

Now, Facebook says it has rolled out the AI tools worldwide, and expanded them to cover both text posts and live video. In the past month, it says, its AI has identified more than 100 instances around the globe that required intervention by first responders. It has also found that reports surfaced by proactive detection–those requiring immediate attention–have been escalated to first responders twice as fast as those reported by users.

At the same time, the company said it has improved the technology that helps its community operations team more efficiently review posts and videos as well as contact the actual first responders who will knock on the door of people in danger of self-harm. It’s also increasing the number of people who manually review posts or videos for suicidal intention.

The Scope Of The Problem

According to the World Health Organization, there’s a suicide somewhere on earth every 40 seconds. Worse, suicide is the second-leading cause of death for people aged 15-29. Combine those stats with the fact that at least one in five of the billions of videos broadcast on Facebook is live, and you get a sense of the scope of Facebook’s potential challenge.

For the last 10 years, Facebook has been refining its approach to suicide prevention, says Jennifer Guadagno, who leads the research on those efforts. That work starts with a baseline understanding—shared by experts at the roughly 80 partner organizations around the world that the company consults—that social support is one of the best ways to prevent suicide: having a friend reach out, ask how you’re doing, even just say hello, or invite you out for coffee.

Facebook has also worked with countless people who have firsthand experience with suicide or suicide attempts by loved ones to learn how to best structure the company’s systems. And while much of what Facebook does in other areas on its platform is geared toward its bottom line, there’s a sense in the suicide prevention community that the company is genuinely interested in addressing this particular problem.

“I think Facebook has been a responsible partner,” says Nancy Salamy, the executive director of Crisis Support Services of Alameda County. “I think that they work with crisis centers, and their intention is to try to help somebody on the platform. They want to get [suicidal users] some help.”

There have, of course, been several instances of people committing suicide on Facebook Live, including two cases last January, one in April, and one in October. Facebook removes such videos because “we want to protect the audience from seeing content that can potentially be upsetting,” says Monika Bickert, head of global product policy.

But while Facebook bans posts or videos that promote suicide or self-injury or encourage others to hurt or kill themselves, it does allow users to share posts or videos expressing their own suicidal intentions. “There’s a tremendous potential benefit there,” says Bickert. “They have a way to engage with their community. They’re feeling alone, and suddenly they’re not alone. They’re connecting to people who love them.”

To John Draper, director of National Suicide Prevention Lifeline, which advises Facebook on its efforts, it’s essential to understand why a suicidal person would share what’s happening to them on social media. It “goes to the primary reason that we began working with Facebook,” Draper says. “If I’m posting to my friend group that I’m suicidal, it’s not likely that I’m wanting them to report me. It’s likely that I want them to support me.”

As an example, Bickert pointed to a case last January where a 28-year-old man in Bangkok went on Facebook Live and threatened to hang himself. A friend of his was watching the livestream and got in touch with people nearby who were able to intervene and save him.

AI Can Help

As with many things on its platforms–face and voice recognition, surfacing relevant news feed posts, showing meaningful videos, combatting terrorism, and more–Facebook sees artificial intelligence as key to doing a better job of suicide prevention.

With its recent efforts, it’s applying AI to both text posts and live video, and doing so across the world.

There are several ways the system works, starting with identifying signals in news feed posts or live videos that suggest someone may be in danger of harming themselves, or that users are concerned about the suicidal intentions of a friend or family member.

Among those signals are comments that someone might leave on a post or live video suggesting they’re worried about a friend. Similarly, Facebook’s AI tools look for text in posts like, “I want to die,” “I’m going to kill myself,” “I am ready to kill myself,” or “I hate my life and I want to die.”

“What we’ve done is look at posts previously reported to us, and that we have flagged and escalated for follow-up,” says Guy Rosen, Facebook’s vice president of product management. “We looked for patterns . . . On many posts like this, there are comments from friends and family saying things like, ‘Are you okay?’ or, ‘Can I help you?'”
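Facebook hasn’t published the details of its classifier, but the behavior Rosen describes can be sketched in a few lines of Python. This is a minimal illustration only: the phrase lists, weights, and threshold below are invented for the example, and the production system is a model trained on previously reported posts rather than a hand-written keyword matcher.

```python
# Illustrative sketch only. Facebook's real system is a learned classifier
# trained on previously reported posts; the phrases, weights, and threshold
# here are invented for the example.

# Phrases of the kind the article says show up in at-risk users' posts.
POST_SIGNALS = [
    "i want to die",
    "i'm going to kill myself",
    "i am ready to kill myself",
    "i hate my life and i want to die",
]

# Comments from friends and family that suggest concern, per Rosen.
COMMENT_SIGNALS = [
    "are you okay",
    "can i help you",
]


def score_post(post_text: str, comments: list[str]) -> float:
    """Combine signals from the post itself and from friends' comments."""
    text = post_text.lower()
    score = sum(2.0 for phrase in POST_SIGNALS if phrase in text)
    for comment in comments:
        lowered = comment.lower()
        score += sum(1.0 for phrase in COMMENT_SIGNALS if phrase in lowered)
    return score


def should_flag_for_review(post_text: str, comments: list[str],
                           threshold: float = 2.0) -> bool:
    """Decide whether to send the post to the community operations team."""
    return score_post(post_text, comments) >= threshold
```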

For live video, the system looks for similar types of comments from viewers as the stream progresses. In some cases, Facebook’s AI has surfaced videos of people threatening self-harm that were never reported by users.

When the system identifies new posts or videos with those types of phrases, it automatically sends them to the community operations team, where they get priority for review. And of course, speed is of the essence when working to help someone who’s actively suicidal.
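Facebook hasn’t said how that prioritization is implemented. Conceptually, though, it behaves like a priority queue in which proactively detected items jump ahead of routine reports; the toy sketch below illustrates that behavior, and its names and structure are hypothetical.

```python
# Toy sketch of a review queue in which proactively detected posts are
# reviewed before routine reports. Names and structure are hypothetical.
import heapq
import itertools

URGENT, ROUTINE = 0, 1  # lower number = reviewed sooner


class ReviewQueue:
    def __init__(self):
        self._heap = []
        self._order = itertools.count()  # tie-breaker keeps equal priorities FIFO

    def add(self, post_id: str, proactively_detected: bool) -> None:
        priority = URGENT if proactively_detected else ROUTINE
        heapq.heappush(self._heap, (priority, next(self._order), post_id))

    def next_for_review(self) -> str:
        _, _, post_id = heapq.heappop(self._heap)
        return post_id


# Even though the user report arrived first, the AI-flagged post is reviewed first.
queue = ReviewQueue()
queue.add("post-reported-by-user", proactively_detected=False)
queue.add("post-flagged-by-ai", proactively_detected=True)
assert queue.next_for_review() == "post-flagged-by-ai"
```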

Context Is Everything

Draper says AI is well-received in the suicide prevention community, noting that it’s often “more accurate than assessment tools overall,” and that there are a number of existing AI-based tools used to help identify troubled people or put them in touch with help.

But he also says context is everything in determining whether or not to take action based on comments flagged by an AI system.

“A person might say, ‘If I get an F on my test, I’m going to kill myself,'” Draper says. “It might be ‘LOL, just kidding,’ but the ability to just kind of gauge whether that individual is at risk because they’ve said” something is important.

That’s why Facebook’s reviewers examine not just the post flagged by the AI system or reported by a user, but also surrounding posts, looking for others that indicate suicidal ideation.

“It’s not common to say they’re suicidal” without other details, Draper says. “They’ll typically have said other things online that indicate that they’re having problems.”

When it comes to news feed posts, that’s a simple matter of looking at older posts to see if there’s anything corroborating what might otherwise just be a flippant remark. With live video, it’s much trickier.

That’s where another new tool at the reviewers’ disposal has become vital.

When the AI system or a user report brings a live video to the reviewers’ attention, they need to figure out where in the video the streamer indicated their intention to harm themselves, and they need to do so quickly.

Now, community operations team members have the ability to pull up two screens. On one, they see the flagged video, and can see visual cues indicating moments where there was an increase in comments–such as might be the case if someone suddenly said on video that they were going to commit suicide. On the other, they can quickly pull up previous videos by the same person to see if they’ve broadcast anything that would further demonstrate their suicidal ideation.
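Facebook hasn’t described how those visual cues are computed. One simplified way to picture it is bucketing the comments by time and surfacing the moments where the volume jumps, so a reviewer can seek straight to them; the bucket size and spike threshold below are invented for the example.

```python
# Minimal sketch: bucket comment timestamps and flag the moments where the
# comment rate spikes. The 30-second bucket and 3x threshold are assumptions.
from collections import Counter


def comment_spikes(comment_times_sec: list[float],
                   bucket_sec: int = 30,
                   spike_factor: float = 3.0) -> list[int]:
    """Return the start times (in seconds) of the stretches of video where
    comment volume is well above the stream's average."""
    if not comment_times_sec:
        return []
    counts = Counter(int(t // bucket_sec) for t in comment_times_sec)
    average = sum(counts.values()) / len(counts)
    return sorted(bucket * bucket_sec
                  for bucket, count in counts.items()
                  if count >= spike_factor * average)
```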

Although Rosen was hesitant to say whether the AI system has returned false positives, he noted that there are cases where “we do get reports, and do get proactive detection that our operations team does not follow up on.”

Improving Communications With First Responders

Obviously, if someone is using Facebook to signal that they might commit suicide, time is very much of the essence in getting them help.

Even after Facebook determines where such a person is located, the company still has to find a way to communicate with the first responders who will check on them. And that has been a problem in the past, says Facebook deputy general counsel Chris Sonderby. “We’ve enabled our response to call the relevant 911 system,” Sonderby says of situations that arise in the U.S. “The challenge, of course, is that 911 networks traditionally are very localized.”

Since such calls would be coming from Facebook’s headquarters in Menlo Park, California, or its operations center in Austin, Texas, 911 centers in other states might well find themselves perplexed trying to understand the situation. That’s why Facebook has been working hard to build bridges in the first responder community around the country. “We’ve done that through our Trust and Safety outreach–people literally going out to meetings and conferences” and talking about the company’s suicide prevention efforts.

At the same time, Facebook has been augmenting its technical capabilities so that community operations team members can automatically reach the appropriate 911 system if they need to make a call. “We’ve had a great deal of success,” Sonderby says, “from our building awareness and also from our technical side at being much better” at getting help for those who need it.
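Sonderby didn’t detail how that routing works. One simplified way to picture it is a directory keyed on the person’s reported location that resolves to the local emergency call center; the structure and placeholder entries below are entirely hypothetical.

```python
# Purely hypothetical sketch of routing a case to the right local 911 center.
# Real emergency-call routing is far more involved; these entries are placeholders.
from typing import Optional

PSAP_DIRECTORY = {
    # (country, state, county) -> identifier for the local 911 center
    ("US", "CA", "Alameda County"): "alameda-county-911",
    ("US", "TX", "Travis County"): "travis-county-911",
}


def route_case(country: str, state: str, county: str) -> Optional[str]:
    """Return the local 911 center for the reported location, if known;
    an unknown location would fall back to manual outreach."""
    return PSAP_DIRECTORY.get((country, state, county))
```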

Salamy, from the Crisis Support Services of Alameda County, says she’s been impressed with Facebook’s suicide prevention work, and with its efforts to establish relationships with experts in the field. “I know that when I go to a national conference,” Salamy says, “their presence is there. They usually have workshops there, and they’re very interested in working collaboratively in the suicide prevention field.”

Draper, who has been advising Facebook without pay for a decade, agrees that Facebook is at the head of the pack among technology companies trying to tackle the suicide problem. “They started out, like all social media companies, [asking,] if someone is suicidal, how do we get them help?” Draper says. “But Facebook has always been very open to all the ideas and suggestions from the suicide prevention community . . . They’re much more proactive than many other social media companies, and that’s what makes them a leader.”

About the author

Daniel Terdiman is a San Francisco-based technology journalist with nearly 20 years of experience. A veteran of CNET and VentureBeat, Daniel has also written for Wired, The New York Times, Time, and many other publications.
