Here we are again. Once again, content control on YouTube is being questioned, and once again advertisers are pulling out over concerns that their brands are appearing alongside inappropriate content. This time, it’s how videos of minors are being used to create what one video blogger called a “soft-core pedophilia ring.”
YouTuber Matt Watson posted a video on Sunday detailing how YouTube comments were being used to identify and pass along videos of minors doing seemingly harmless activities like yoga, gymnastics, and dancing. Watson's 20-minute video, now viewed more than 2 million times, demonstrates how commenters would post specific time stamps pointing to sexually suggestive moments, and how, once users clicked on one of the videos, YouTube's algorithms recommended similar ones.
Of course, all throughout this disturbing process, brand ads appear right alongside the videos. As Wired UK detailed, such brands as Alfa Romeo, Fiat, Fortnite, Grammarly, L’Oreal, and Maybelline were among those found. As a result, Disney, Nestle, Epic Games, and others announced they were pulling ads from YouTube altogether. Sound familiar?
In early 2017, YouTube was lambasted over ads appearing alongside racist content and terrorist group videos, prompting notable brand marketers such as Verizon, Johnson & Johnson, and AT&T to temporarily pull their advertising. There have been other incidents since then. But after a fair amount of time, and much industry hand-wringing over brand safety (some companies have even established permanent chief brand safety officer positions), most advertisers eventually made their way back to YouTube. The platform's scale is simply too vast to ignore. Just last month, AT&T announced it was heading back to YouTube after a two-year brand safety hiatus. Now the company tells the New York Times, "Until Google can protect our brand from offensive content of any kind, we are removing all advertising from YouTube."
Every minute, up to 300 hours of video content is uploaded to YouTube. That staggering number represents the size and scale of both the opportunity and the risk facing brand advertisers. Over the last two years, YouTube and advertisers alike have worked to improve brand safety on the platform: more third-party measurement tools to track where ads appear, internal context-analysis technologies that can flag unsafe text and images, and constant purging of unsafe and bogus accounts. What this latest incident illustrates, though, is that the fight against offensive and inappropriate content will always be an issue. There is no ultimate fix.
One executive at a major global media agency, who agreed to comment on background, said that when it comes to user-generated content, there is no such thing as 100% brand safety or 0% risk, and marketers need to understand that, no matter how zealous the zero-tolerance effort.
For its part, YouTube has responded pretty quickly. Over the last two days, the platform has taken a more aggressive approach, beyond its normal protections, and disabled comments on tens of millions of videos that include minors; reviewed and removed thousands of inappropriate comments that appeared against videos with young people in them; terminated more than 400 channels for the comments left on videos; removed dozens of videos that were posted with innocent intentions but clearly put young people at risk; and reviewed and excised autocompletes that could have increased the discoverability of the offensive content that’s against its policies.
The media agency executive says brand ads on YouTube serve billions of impressions for marketers; that one or two appear on videos like those Watson pointed out is awful, but out of a few billion, it's a ratio marketers have to live with because they need YouTube to reach their audience. The key difference between now and 2017 is that today, both brands and YouTube can say they're doing all they can to improve the situation. The executive likened it to the difference between getting robbed while walking through a high-crime neighborhood with a nice camera around your neck and $50 bills hanging out of your pockets, and getting robbed after staying in a well-lit, safe area and taking every precaution.
Loren Rochelle, CEO and co-founder of NOM, a video and data tech firm that focuses on brand safety and contextual ad targeting, says the decision for brands to leave YouTube is a very difficult one, but marketers are now coming to terms with the ongoing nature of brand safety. “This problem forces them to constantly weigh the risks of potentially running in front of objectionable content or risking millions of missed opportunities to engage their consumers,” says Rochelle. “People are starting to realize the problem won’t go away with any one solution. It’s like tending a garden–it needs constant and consistent care and attention.”
A survey conducted last fall and released just a week and a half ago by artificial intelligence and computer vision firm GumGum found that more advertisers are taking brand safety far more seriously than in 2017, and that platforms like YouTube are more open to third-party measurement, making it easier for agencies and brands to observe where their ads appear. However, the survey also found that marketing professionals ranked YouTube as the least brand-safe of eight platforms, below Twitter, Facebook, Instagram, search, other publisher sites, LinkedIn, and Snapchat.
“It is in YouTube’s best interest to serve up safe environments for consumers and brands alike,” says GumGum president and COO Phil Schraeder. “But cleaning up social platforms that trade in user-generated content is not as simple as people want it to be. Remember that there’s another kind of scandal these platforms face, which is about censorship and freedom of expression.”
Brands have good reason to feel uneasy, according to an October 2018 study by cybersecurity firm Cheq and IPG Mediabrands, which found that consumers assume every ad placement is intentional, and are 2.8 times less willing to associate with a brand when its ads are displayed in unsafe environments.
In a statement, a YouTube spokesperson said, “Any content–including comments–that endangers minors is abhorrent, and we have clear policies prohibiting this on YouTube. We took immediate action by deleting accounts and channels, reporting illegal activity to authorities and disabling comments on tens of millions of videos that include minors. There’s more to be done, and we continue to work to improve and catch abuse more quickly.”
Schraeder says this latest incident will cost YouTube some business, and brands will be more skeptical of whatever the digital video giant says it will do to remedy the issue, demanding more ways to validate brand safety. But advertisers will return. "They're going to come back because YouTube offers uniquely attractive levels of audience targeting capability and audience engagement," says Schraeder. "What we're all hoping is that brands and YouTube will together realize that they'll make even more money if they get a solid grip on brand safety."