
TikTok is changing its algorithm to help you avoid doomscrolling harmful videos all day

The move comes in the midst of a public reckoning over social media’s potential toxicity for children and teens.


TikTok will retool its algorithm to avoid harmful streams of negative or problematic content, the company said Thursday, presumably in an effort to combat surges of criticism over social media’s damaging effects on young users’ psyches.


According to TikTok, the video-sharing platform’s “For You” feed, which offers an endless flow of fresh content curated by algorithmic recommendations, was already designed to avoid repetitive patterns that risk boring users—for example, by ensuring that it doesn’t display multiple videos in a row from the same creator account. However, TikTok is now stepping it up by training the algorithm to recognize and break up patterns of content with the same negative themes, such as “extreme dieting or fitness” or “sadness,” the company wrote in a blog post. By doing so, it hopes to “protect against viewing too much of a content category that may be fine as a single video but problematic if viewed in clusters.”
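TikTok hasn’t published how its cluster-breaking works, but the idea—capping how many same-theme videos appear in a row in a ranked feed—can be sketched in a few lines. Everything below (the theme labels, the greedy reordering) is a hypothetical illustration, not TikTok’s actual system:

```python
def break_up_clusters(ranked, max_run=1):
    """Reorder a ranked feed so that no more than `max_run` consecutive
    videos share the same theme label. Items that would extend a run are
    deferred and re-inserted once the run is broken. Purely illustrative."""
    pool = list(ranked)          # (video_id, theme) tuples, already ranked
    deferred = []                # items held back to break up a run
    result = []
    run_theme, run_len = None, 0
    while pool or deferred:
        picked = None
        # Prefer a deferred item whose theme now differs from the current run.
        for i, item in enumerate(deferred):
            if item[1] != run_theme or run_len < max_run:
                picked = deferred.pop(i)
                break
        if picked is None:
            while pool:
                item = pool.pop(0)
                if item[1] != run_theme or run_len < max_run:
                    picked = item
                    break
                deferred.append(item)
        if picked is None:
            # Only same-theme items remain; emit them anyway.
            picked = (deferred or pool).pop(0)
        if picked[1] == run_theme:
            run_len += 1
        else:
            run_theme, run_len = picked[1], 1
        result.append(picked)
    return result
```

For example, a queue of two back-to-back “extreme dieting” videos followed by a cooking video would be reordered so the cooking video lands between them—exactly the “break up clusters” behavior the blog post describes, with everything else about the ranking left alone.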

“We’re also working to recognize if our system may inadvertently be recommending only very limited types of content that, though not violative of our policies, could have a negative effect if that’s the majority of what someone watches, such as content about loneliness or weight loss,” it added. “This work is being informed by ongoing conversations with experts across medicine, clinical psychology, and AI ethics.”

Currently, the company’s algorithm incorporates metrics like how long a user lingers over a piece of content to inform recommendations—which, as you might imagine, could cause users to spiral down the rabbit hole at the steep cost of their mental health.
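A toy scoring function makes the problem concrete: when dwell time carries most of the weight, content a user can’t look away from—healthy or not—gets amplified. The weights and signals below are invented for illustration; TikTok has not disclosed its actual formula:

```python
def engagement_score(watch_seconds, video_seconds, liked, shared):
    """Hypothetical engagement score where dwell time dominates.
    `liked` and `shared` are 0 or 1. Weights are made up for illustration."""
    completion = min(watch_seconds / video_seconds, 1.0)
    return 0.6 * completion + 0.25 * liked + 0.15 * shared
```

Under a scheme like this, a video watched to the end but never liked or shared (score 0.6) still outranks one that was liked but abandoned halfway through (score 0.55)—which is how lingering alone can steer recommendations.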


TikTok’s move comes in the midst of a public reckoning over social media’s potential toxicity for children and teens, who are at their most impressionable ages. In September, the Wall Street Journal published an explosive trove of internal documents showing that Facebook—now Meta—knew its platforms were “riddled with flaws that cause harm, often in ways only the company fully understands.” One article in particular revealed that Instagram had negative effects on teen girls, and featured a 13-year-old girl who joined the app, was flooded with images of “chiseled bodies, perfect abs, and women doing 100 burpees in 10 minutes,” and eventually developed an eating disorder. “We make body image issues worse for one in three teen girls,” read a slide from a company presentation in 2019.

Social media executives are now being hauled into Congress to answer questions about the dangers of their products—including TikTok, in October—but research on TikTok is comparatively lacking. Similarly, much of the methodology behind its often frighteningly acute algorithm has been shrouded in mystery. However, a recent New York Times article—titled “How TikTok Reads Your Mind”—dissected a leaked document obtained from the company’s engineering team in Beijing called “TikTok Algo 101,” which suggested that the algorithm optimizes feeds to keep users in the app as long as possible, even if this means pushing “sad” content that could induce self-harm.

In its blog post, TikTok also revealed that it would let users banish specific words or hashtags from their feeds, which would help, say, a vegetarian who wants to avoid meat recipes, or a person with low self-esteem who wants to avoid beauty tutorials.
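The word-and-hashtag filter amounts to screening each recommended video against a user’s personal blocklist before it reaches the feed. A minimal sketch, assuming videos carry a caption and a hashtag list (field names and matching rules are invented here; TikTok hasn’t described its implementation):

```python
def filter_feed(videos, blocked_terms):
    """Drop videos whose hashtags or caption words match a blocked term.
    `videos` are dicts with invented "caption" and "hashtags" fields.
    Hypothetical sketch of the user-facing filter TikTok describes."""
    blocked = {t.lower().lstrip("#") for t in blocked_terms}
    kept = []
    for v in videos:
        tags = {t.lower().lstrip("#") for t in v["hashtags"]}
        words = {w.strip("#,.!?").lower() for w in v["caption"].split()}
        if blocked & tags or blocked & words:
            continue  # video mentions a blocked term; hide it
        kept.append(v)
    return kept
```

So the vegetarian in the article’s example would add “meat” to their blocklist, and any video tagged #meat or captioned with the word would simply never be shown.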


The platform has more than a billion users, roughly two-thirds of whom are ages 10 to 29.
