Last week, within hours of deadly terror attacks in Istanbul and Dhaka, Facebook activated its Safety Check feature, one of roughly three dozen times the company has let people in the area of a disaster instantly tell their friends and relatives that they were safe.
But after Sunday’s deadly truck bombing in Baghdad, which struck a crowded shopping district and killed at least 250 people, a secondary outrage grew online: Facebook didn’t activate the feature for users in Baghdad until the following day, July 4, at 6:55 p.m. local time.
“The Facebook’s safety check-in for Baghdad comes about 30 hours late from the actual explosions,” Razbar Sulaiman, a hackathon organizer and UN specialist who lives in Iraqi Kurdistan, wrote in a blog post, echoing frustrations on social media. “Did it seriously take 30 hours after the explosions to create/consider the safety check-in? I’m extremely disappointed.”
Initially, a Facebook spokesperson told Politico that the feature was not deployed following the bombing on Sunday; as Politico reported, “she noted the feature is not used during longer-term crises, like wars or epidemics, because such emergencies have no clear start or end, making it difficult to determine when an individual is ‘safe.'”
In an email to Fast Company, however, a spokesperson confirmed that Safety Check did roll out in Baghdad on Monday, but that it was triggered by a feature introduced last month, “community-generated Safety Check,” which initiates the Safety Check process once a critical mass of users is discussing a crisis on the network, rather than requiring an engineer or employee to begin the process.
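In rough terms, a community-generated trigger of this kind works like a sliding-window threshold: count crisis-related posts from a region, and flag the region for a possible Safety Check activation once the count in a recent time window crosses some bar. The sketch below is purely illustrative; the threshold, window length, and region granularity are invented here, and Facebook has not published the actual criteria it uses.

```python
from collections import deque

class CrisisSignalDetector:
    """Hypothetical sketch of a community-generated Safety Check trigger.

    Flags a region once a critical mass of crisis-related posts appears
    within a sliding time window. All parameters are invented for
    illustration; Facebook has not disclosed its real thresholds.
    """

    def __init__(self, threshold=1000, window_seconds=3600):
        self.threshold = threshold          # posts needed to flag a region
        self.window = window_seconds        # sliding window length, in seconds
        self.events = {}                    # region -> deque of post timestamps

    def record_post(self, region, timestamp):
        """Record one crisis-related post; return True if the region
        should be considered for a Safety Check activation."""
        q = self.events.setdefault(region, deque())
        q.append(timestamp)
        # Drop posts that have fallen out of the sliding window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) >= self.threshold
```

In a real system the flag would feed a review step (human or automated) rather than activating Safety Check directly, which matches the delay users observed in Baghdad: detection and activation are separate stages.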
“In June we began testing features that allow people to both initiate and share Safety Check on Facebook,” she said, adding that the Safety Check sent out in Orlando last month after the deadly nightclub shooting there was also community-generated.
Marcy Scott Lynn, of Facebook’s Global Public Policy team, added more detail in an email on Wednesday: “While we have improved the launch process to make it easier for our own team to activate more frequently and faster, we believe that we can make Safety Check even more relevant for people when they need and want it most by empowering communities to identify and elevate local incidents.”
The new feature—and the confusion about how Safety Check was activated in Baghdad—reflects the challenges Facebook faces as it seeks to play a larger role in humanitarian crises.
Facebook’s public issues with Safety Check are a “great example of unintended consequences,” says Timothy Coombs, a professor and crisis expert at Texas A&M University. “The company is not in the emergency notification business. It is a sideline, not their core business, so we should not expect them to carefully sort through every global event. The community-based idea takes that decision making and responsibility out of their hands.”
Safety Check allows Facebook to push a message following a disaster to users who are in the affected area, asking, “Are you safe?” Their replies are then automatically distributed to their networks and prominently displayed.
The project is managed by Facebook’s Social Good team, which organizes a number of other Facebook safety and humanitarian efforts, like a digital Amber alert system and a charitable donation system. A Facebook intern began developing Safety Check at an internal hackathon after seeing Japanese residents turn to social media to communicate with loved ones after that country’s 2011 earthquake and tsunami.
So far, about a billion people have received such notifications since the feature was first used in 2014, after the Philippines was struck by Typhoon Ruby, according to the company. Recently the company has expanded its use from natural disasters to incidents of terrorism, beginning with the deadly attacks in Paris in November.
“The Safety Check stuff that we’ve done, where 150 million people were notified of their friends being safe in the [Nepal] earthquake,” CEO Mark Zuckerberg told Fast Company’s Harry McCracken last year, “you can only do that if you’ve mapped out what people’s relationships are, and you have a sense of where people are in the world, and you have a tool that they’re checking every day.”
After deploying Safety Check in response to the attacks in Paris, the company won praise for enabling people to quickly and easily notify their contacts that they were safe. But others criticized Facebook for not deploying the feature when, just a day before the Paris attacks, a pair of suicide bombers killed dozens in Beirut.
“Since we activated Safety Check in Paris, we have heard positive feedback about how reassuring it is to receive notifications that a friend or loved one is safe,” wrote Alex Schultz, the company’s vice president of growth, in a blog post last year. “I personally have received several from people I know and love and have felt firsthand the impact of this tool. But people are also asking why we turned on Safety Check in Paris and not other parts of the world, where violence is more common and terrible things happen with distressing frequency.”
Schultz and, in a separate post, Zuckerberg, explained that the policy had initially been to use the feature solely for natural disasters, until the company noticed a burst of activity on the social network after the Paris attacks.
“There has to be a first time for trying something new, even in complex and sensitive times, and for us that was Paris,” wrote Schultz.
Facebook’s isn’t the only “safety check” for those who’ve survived disasters: Google’s Person Finder and various open source systems that humanitarian organizations can deploy after disasters offer similar services. But the rapidly growing use and scale of Safety Check raises a number of questions beyond when and where it’s deployed.
“There is a very slippery moral slope in determining what is a disaster, especially from the safe confines of Silicon Valley,” Wayan Vota, a cofounder of ICT Works, a nonprofit focused on international development technology, wrote on the group’s blog. “I don’t feel comfortable leaving it up to Facebook to decide which disasters are worthy of social media support or not. I would much rather see the Safety Check feature managed or at least influenced by local Red Cross or Red Crescent organizations and government emergency response agencies.”
Others have pondered what kind of stewardship a company like Facebook should have over a system like Safety Check. “What if a person registers that they are okay on one service but not another? What if someone marks someone else as safe (a useful option that Facebook provides) based on inaccurate information?” Slate‘s Lily Hay Newman wondered last year. “And what if you don’t want to answer the question ‘are you safe?’ when you’re lying in a hospital bed after a trauma?”
What other unintended effects might follow from Safety Check notices spreading across the world’s largest social network, particularly after a terrorist attack, remains a matter of speculation for Robinson Meyer, writing at the Atlantic last year. “On the one hand, maybe it’s the sole piece of information you need to know after a major attack: ‘The people you love are safe. You may pay attention to other horrors than these.’ Or maybe it reinforces terror’s message.”
Still, some humanitarian groups have come to rely on the tool during crises. “We’ve had personnel in the vicinity of a couple of incidents, which whether natural or manmade, our personnel will check in which obviously makes a huge difference to us to know everybody’s safe and okay,” says Rebecca Gustafson, senior advisor for global communications at International Medical Corps.
The group has also worked with Facebook to discuss disaster planning, and Gustafson says she’s sympathetic to the trade-offs between declaring a disaster too quickly versus waiting for more information, particularly when an organization doesn’t have staff of its own in the affected areas.
“The biggest thing we always say in emergency response is bad information is worse than no information,” she says. “People can criticize emergency responders as taking too long, but the tech community moves at warp speed, and I think being able to take this extra beat to say, is this, is this not, is worth it to verify.”
Per Aarvik, the Norway-based president of the Standby Task Force, a volunteer-led humanitarian group that coordinates geographical and other information after sudden-onset disasters, issued cautious praise for Safety Check.
“Both Google and Facebook are global powers, and just as government or large corporations, they are obliged to use their tools for good when needed,” he said. “And the more they can do it in kind of an unselfish mode, the better, because they really have the sources to go beyond their day-to-day mission during emergencies or times of crisis.”
The new effort to bypass a human engineer or employee at Facebook when deciding to deploy Safety Check is intended in part to grapple with the technical and political challenges of running the world’s largest such service. Last month, Facebook made it easier for non-engineering employees to activate the feature.
“Over the past few months, we have improved the launch process to make it easier for our team to activate more frequently and faster, while testing ways to empower people to identify and elevate local crises as well,” said the company spokesperson.
After Paris, Schultz said Facebook would continue to evaluate when Safety Check should be used, though he cautioned that it might not be appropriate for every disaster. For instance, in the case of an ongoing disaster or recurring violence, the company is reluctant to risk letting users tell their loved ones they’re safe, only to then be hurt or killed soon after.
“In the case of natural disasters, we apply a set of criteria that includes the scope, scale, and impact,” Schultz wrote. “During an ongoing crisis, like war or epidemic, Safety Check in its current form is not that useful for people: because there isn’t a clear start or end point and, unfortunately, it’s impossible to know when someone is truly ‘safe.’”
After the Baghdad truck bombing on Sunday, a spokeswoman for Facebook told Politico that it has worked with the “global humanitarian community” to identify conflict zones where it won’t deploy the feature, including in Iraq, Syria, Afghanistan, and Yemen.
Instead, it was by automatically detecting user discussion around the Baghdad bombing that the “community-generated” Safety Check kicked into gear. But the alert was sent out only the following day, amid a torrent of complaints on social media that Facebook was paying more attention to terrorist attacks elsewhere.
Facebook has been experimenting with new ways of deploying Safety Check. Last month, the company made it possible for “trained teams” around the globe to deploy the feature without assistance from engineers, and rolled out an internal “Crisis Bot,” so technical issues can be more quickly rooted out.
When it works properly, the “community-generated” Safety Check will enable Facebook to defer to the wisdom of its users as to when the feature should be considered, rather than requiring the company’s staff to first evaluate situations around the globe to determine whether Safety Check is appropriate.
“Safety Check is just one tool that people use during times of crisis or disaster, and should be seen as a symbol of how important and impactful technology can be in helping people,” Marcy Scott Lynn, of Facebook’s Global Public Policy team, wrote by email, “but this is still very early days.”