Facebook is changing the way it deals with signs of suicide on its network. A tool that allows users to flag potentially suicidal posts will now route those reports to Facebook’s own specialized team—a potential step toward a more empathetic Internet.
Facebook first introduced a flagging feature for self-harm and suicide alerts in 2011. But where it used to route alerts to the National Suicide Prevention Lifeline in the U.S. and the Samaritans in the U.K., the social network will now cut out the middleman and handle them internally with a specially trained team.
Once a friend reports a sensitive post, Facebook sends the distressed user a message offering access to more information or the option to talk to someone, while keeping the original flagger anonymous. This could mean a faster, less intrusive response for users already in delicate emotional situations. Friends who reported the post are also given links to local emergency responders, prompting them toward action if the situation escalates.
This and Facebook’s push earlier this year to include Amber Alerts in news feeds seem to indicate the social network is using its platform for good. While Facebook certainly has an interest in protecting its access to user data, it also appears to be using that data to protect users.
Admittedly, this is a tightrope for the social giant. Approaching users who may be in danger is a delicate business, and not one it has delved into before. Just last fall, the U.K.-based Samaritans launched and then was forced to pull its Radar app due to privacy concerns. The app monitored Twitter to detect suicidal posts and alert a user’s followers.
But Facebook’s suicide prevention 2.0 efforts are baked right into its interface, perhaps finally giving us a next step toward the empathy we need on the Internet.