
Psychologists still don’t know how dangerous suicide videos that spread online are to those who might be considering self-harm. For platforms, that creates a loophole.

Researchers can’t even begin to assess the damage from viral suicide videos

[Photo: Akshar Dave/Unsplash]

BY Ruth Reader

Earlier this month, a video of a suicide went viral on TikTok. The short-form video platform is now appealing to the broader social media industry to develop strategies to keep harmful content out of people’s feeds. In its plea, TikTok notes that the industry already works together to develop frameworks for suppressing child sexual abuse and terrorist-related content. When broadcasts of suicides and other violent events appear on social networks, there’s nothing users can do to ensure they won’t see them short of canceling their accounts. However, there is political pressure for these platforms to effectively control damaging content on their networks, or else face tougher regulations.

Visceral content has plagued the internet ever since people were given the opportunity to broadcast their lives raw and unfiltered. This struggle has played out especially loudly on Facebook. The social network does not allow users to post any kind of self-harm or “excessive” violence. Still, enough content was getting onto the platform that two years after Facebook launched its live broadcast feature, it rolled out artificial intelligence to spot suicidal posts on its platform. It also now has an enormous team of content moderators who respond to flagged posts and pull down violent or harmful content.

That has not stopped gutting portrayals of human suffering from leaking through. The suicide that ultimately went viral on TikTok originated on Facebook over the summer. In a conversation with the BBC, Josh Steen, a friend of the person who died in front of viewers, said that he flagged his friend’s live stream to Facebook a half hour before his friend killed himself. Facebook’s system responded almost two hours later and said that the stream did not violate its guidelines, even though the suicide had already taken place at that point. Steen is now seeking answers from Facebook, though they don’t appear to be forthcoming.

In the past, similar failures to keep traumatizing content off Facebook have led to calls to put Facebook Live on a delay or shut down the service altogether. But, as this most recent incident shows, Facebook hasn’t fundamentally changed its service to further limit this kind of content from reaching users. The moral imperative for making these changes may be obvious, but the psychological impact of viewing this kind of content is less clear. For platforms with hundreds of millions of users, negatively impacting even a small percentage amounts to a large number of people. Without a clear understanding of the human impact of seeing a suicide online, platforms don’t have enough incentive to eliminate any possibility of this kind of content making its way onto our feeds.

Why it’s so hard to study the impact of violence on the internet

It is incredibly difficult to study the impact of viewing a suicide online, but sociologists have tried to quantify the mental health impacts of social media by surveying users. For one study from last year, researchers interviewed a group of largely white female Instagram users between the ages of 18 and 29 about whether they came into contact with images or video of self-harm and how they reacted to it. They were asked if they found that content disturbing, among a list of other questions concerning their mental health and predisposition to suicidality. In the first wave, 1,262 people responded. Only 729 people of the same group responded to a second set of questions sent a month later. The researchers found that about 20% of those who saw images of self-harm sought them out. More telling, according to the researchers, was that over half of those who had seen these images had considered harming themselves in similar ways, and nearly a third said they had hurt themselves as a result of seeing this content.

The study concluded that seeing self-harm on social media might have an impact on whether someone harmed themselves in the future. However, the study has limitations: it does not prove that seeing self-harm necessarily leads to self-harm—it only shows there might be a relationship between the two. Its sample also heavily favored white women, so the results cannot be generalized to a broader population. Research psychologists I spoke with have gripes with this kind of research as well, calling such studies “squishy” because they rely on self-reported data, which isn’t very reliable. In particular, people are not good at remembering an emotional reaction they had in the past.

Directly studying the impact of viewing a suicide, however, is essentially impossible because it would require participants to watch suicides, possibly repeatedly. “That kind of exposure is ultimately unethical,” says David Rudd, a former clinical psychologist who still conducts research. Rudd says about 15 years ago he and a group of researchers wanted to study suicide warning signals to understand how people react to seeing another person in emotional need.


“We wanted to actually do scenarios in the community where somebody might be distressed and see if people would approach them and actually engage them, but we couldn’t get the study approved,” he says. The review board ultimately decided that it was potentially unethical to unwittingly expose somebody to that kind of a trauma. In the end, they designed the study around a questionnaire with vignettes describing various scenarios and asking a person how they might react. “But it’s remarkably artificial,” says Rudd.

He says any scientific study that tries to look at the impact of viewing a suicide online would run into a similar problem. This presents an ethical conundrum for the research community: reproducing the very exposure they want to study is itself unethical. It is immoral for social platforms to expose unsuspecting viewers to traumatic content, and yet through their own failure to mitigate this kind of content, platforms allow it, and the outcome of their inaction remains unknown.

A vulnerable population

This viral suicide comes at a particularly delicate time in American history. Suicide rates have been climbing since 1999. In particular, experts have expressed concern over rising youth and teen suicides. Now, under the enormous strain of a pandemic, more Americans are struggling with mental health issues. A weekly morbidity and mortality report from the Centers for Disease Control and Prevention in June noted that 40% of Americans said they experienced adverse mental health conditions in the previous week, including substance abuse. A whopping 11% said they had seriously considered suicide in the previous 30 days. With more people confined to their computer screens, traumatic content going viral becomes a more serious affair.


Several studies have found a relationship between depression and social media use and in particular that higher levels of depression are correlated with more time spent online. However, the exact nature of the relationship between social media and depression is not clear. Researchers haven’t proven that social media actively causes depression. It’s possible that people who are depressed are drawn to spending more time online. It is also true that people who have experienced trauma or are depressed sometimes seek out support in online communities—a positive aspect of social networks. What we do know is that there is a population of people who spend time online who may be particularly vulnerable to having a negative reaction to seeing a suicide, especially if they weren’t expecting to see it.

How a person responds to witnessing a suicide depends on several factors, including their state of mind. Most people who see a suicide are likely to experience some sort of effect, but it may not impact them for long, according to Craig Bryan, a clinical psychologist and director of recovery and resilience at Ohio State University’s Wexner Medical Center. Immediately after watching someone get injured or die, lots of people will feel uneasy, says Bryan—it’s a natural response. “We’ll all lose sleep, we’ll all feel a little anxious, we’ll all be bothered by it,” he says. “But typically within a few weeks, about 85% to 90% of people will settle back into where they were before exposure. Resilience is the rule.”

However, he says, not all people will have that response. “There’s this 10% of those who are exposed who may have lingering effects or consequences.”


It’s that 10% that is of concern; 10% at TikTok’s scale could be an enormous number of people. TikTok, which uses algorithms to serve 800 million users tailored content, has not said how many people saw the suicide that went viral on its platform. When the suicide first appeared on Facebook, it was streamed to a live audience of 200 people, including friends of the broadcaster, according to the BBC. In that same report, Facebook says it removed the video the day it went live. However, between the time it went up and the time it came down, many people downloaded and shared the video, which allowed it to eventually proliferate on TikTok.

The concern about contagion

One of the biggest concerns with public suicide is the potential for what’s called “suicide contagion,” a phrase that on its face seems to imply that suicide can be caught just like a cold. Several studies have shown that memorializing a person who dies of suicide can lead to suicide contagion, where people already vulnerable to suicidal thinking may be galvanized to act because of a belief they will be memorialized. On social media, the viral spread of a suicide poses as a kind of memorial.

“If you look at the work on memorials, a lot of that work is based on the idea that if somebody [felt] their life was unimportant and irrelevant, they become relevant in their death and that becomes a part of the motivation to die,” says Rudd. “That really is kind of the dangerous element of it, and that’s a part of what those platforms potentially can do.”


Another problem with trying to understand the impact of suicide on social media is that it’s not easily comparable to other phenomena. It is not the same as witnessing a suicide in person, whether walking down the street or in a more intimate setting, which can be a deeply traumatic event. It is also not the same as viewing information about a suicide through a news media source. A person can witness someone kill themselves unfiltered online, but the fact that they see it through a screen may make a difference in how it might affect them.

In a situation where you may see someone die by happenstance, Rudd says you may get very limited exposure depending on whether you were paying attention at the time. If a person didn’t really see it in the moment, they certainly had no opportunity to rewind and replay the incident on loop. Online, one can do exactly that. “There’s opportunity for re-exposure and detailed re-examination and so the nature of that trauma would be very different,” says Rudd. These nuances matter in ultimately understanding how these videos have the potential to affect someone.

Rudd says there is a possible pathway to start approximating the effects of seeing a suicide online. Researchers could look at the long-term impact of media on a person’s emotions. The first step would be to look at how long a person retains feelings related to content that makes them laugh. Then, researchers could expose them to content that is slightly more upsetting, essentially graduating the trauma level of the content bit by bit while measuring the long-term emotional impact. Ultimately, researchers would have to stop short of exposing anyone to anything that might be considered unethical.

Rudd likens this idea to the ABC show What Would You Do?, which he says relies on the basic structure of social psychological research. The show uses actors to create emotionally charged situations to see how people react if they don’t think anyone was watching. Recent episodes include a cafe owner publicly reprimanding an employee for giving away free coffee to a homeless man and a man shopping in a New York grocery store wearing a Confederate flag. Such incidents can be upsetting, but they’re used to tell the audience something about human nature for entertainment purposes. Similar situations constructed inside a study could help researchers understand the way even mild trauma can have a lasting impact.

Building resilience

In the meantime, TikTok is advocating for social networks to come together to find a better way to prevent suicidal content from reaching the masses. Bryan, the psychologist from Ohio State University College of Medicine, notes that violence will inevitably go viral. While social networks should create better ways of identifying and removing horrifying content on their platforms, he says that it’s important to remember that there are resources available for people who may be feeling unmoored after seeing a traumatic event online. People can and do recover from experiencing trauma; it is hardly a life sentence. If anything, one benefit of the pandemic is that teletherapy is being marketed widely and is readily available through a variety of providers.

Bryan also says that we could better prepare people to handle unexpected trauma. In addition to mental health resources, he says, we could be shoring up our communities to be more emotionally supportive. Parents and friends, he says, can do a great deal to support people who may be struggling with negative feelings. Rudd agrees.

“I will tell you that these issues would not be a serious concern if we had significant, meaningful discussions about them with kids in grade school, about what does it mean to be emotionally healthy? What does it mean to be resilient? . . . We don’t do that in our country,” says Rudd.

Dealing with traumatic content is as much the responsibility of social networks as of society writ large, both psychologists point out. There is a public responsibility to hold social networks accountable for unnecessarily exposing people to traumatic events, even if we do not fully understand the impact. It is also important, perhaps more so, to prepare people to be resilient in the face of trauma as a matter of public health.

“It’s one of those things, no matter what the medium, what the forum, no matter where we go or what we do in life, there is a risk of exposure to unpredictable bad things,” says Bryan. “So the question is: What are the steps that we take to mitigate those risks?”



ABOUT THE AUTHOR

Ruth Reader is a writer for Fast Company. She covers the intersection of health and technology.

