The concept of brand safety has moved from a few backroom meetings to the forefront of any major advertiser’s priority list over the last few years. Simply put, brands don’t want the ads they’re paying a lot of money for to appear alongside offensive or dangerous content. Makes sense, right? Of course, this is not an exact science, as the laundry list of YouTube scandals and the work that activists like Sleeping Giants are doing will attest. It’s a never-ending battle against a constantly moving target, and because so many ads are bought and sold amid the nanoseconds of the programmatic system, brands have enlisted the help of algorithms to keep up with what content their ads are going to be placed alongside.
Perhaps the most common tool is a keyword blacklist, which prevents ads from showing up in content that features certain words. Of course, this is easy when it comes to racial slurs or what the great George Carlin famously outlined as the Seven Dirty Words. What about "Nazi," you may ask? Well, certain contexts would be undesirable, but what if it's a piece on WWII history?
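To see why that nuance gets lost, here is a minimal sketch of how naive keyword blocking behaves. The blacklist and the article text are invented for illustration (real vendor lists run to hundreds or thousands of terms), but the core logic is the same: if any blocked word appears anywhere, the page is flagged.

```python
import re

# Hypothetical, simplified blacklist for illustration only.
BLACKLIST = {"nazi", "shooting", "death", "injury"}

def is_blocked(article_text: str) -> bool:
    """Flag an article if ANY blacklisted word appears, regardless of context."""
    words = set(re.findall(r"[a-z]+", article_text.lower()))
    return not BLACKLIST.isdisjoint(words)

# A WWII history piece gets blocked exactly like hate content would:
history = "The museum's new exhibit traces the defeat of the Nazi regime in 1945."
print(is_blocked(history))  # True — flagged despite being safe
```

The function never looks at the words around the match, which is precisely the failure mode the report describes.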
It’s this kind of nuance that a new report says is missing from the enforcement of major keyword blacklists, preventing perfectly safe content from generating revenue for publishers.
Brand safety company CHEQ researched keyword blocking used in 225 articles across 15 major sites, including CNN, the New York Times, the Guardian, and the Wall Street Journal, on a single day—July 19, 2019—and discovered that 57% of neutral or positive stories were incorrectly flagged as unsafe for brands. The most common keywords that caused an article to be blocked were dead, injury, lesbian, death, gun, sex, shots, and alcohol. But what was missing was context. They found that one in five articles that included the word "death" were perfectly safe. A 1,700-word Bleacher Report article was blocked because it mentioned that an Arkansas Razorbacks player had an ankle "injury."
The report also found that 73% of safe stories on LGBTQ websites like PinkNews and the Advocate were flagged as brand unsafe. PinkNews editor Benjamin Cohen told the researchers, "On the open marketplace basically a whole heap of our content gets blocked for no legitimate reason. A lot of ad networks are blocking content for the word 'lesbian' because they lazily think lesbian equals porn. I don't think the issue is the brand, it's whoever is administrating their block list, because often the brands are really surprised when you send them the block list."
CHEQ CEO Guy Tytunovich says, as a result, LGBTQ content ends up getting denied ad dollars. “This is not done maliciously. This happens because many verification companies don’t have the technological capability to distinguish between positive LGBTQ content and potentially negative content like pornography or hate speech,” says Tytunovich. “So many times they ‘play it safe’ by blacklisting LGBTQ-related terms, and the collateral damage is that LGBTQ content creators are struggling to monetize.”
As culture becomes more divisive, brands are increasingly cautious. According to the Wall Street Journal, in the second quarter of this year, the number of advertisers working with ad measurement firm DoubleVerify that blocked ads from appearing on news or political content was up 33% from 2018 and more than double 2017’s total. Integral Ad Science said the average number of keywords the company’s advertisers were blocking in the first quarter was 261, with one advertiser blocking 1,553 words.
Not only does this type of keyword-based brand safety prevent marketers from reaching the very audiences they're paying a lot of money to target, but in the case of LGBTQ sites like PinkNews and the Advocate, it unintentionally works to stifle the voices of these communities. Of course, being an AI-led brand safety platform, CHEQ has a vested interest in these findings. But the research does illustrate how easily some good intentions can mess with the digital content economy. As Nucleus Marketing Solutions CEO Seth Rogin said in CHEQ's report, "If you're spending precious ad dollars to be around content, you'd better have a system that knows the difference between a shooting and 'shooting a movie.'"
Tytunovich says advertisers should demand that simplistic keyword blacklists be put out of practice. “This will put more pressure on the industry to adopt smarter tech, which can understand the content contextually and make informed decisions, as opposed to bluntly blocking anything that might be hazardous,” he says.
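At its simplest, "understanding the content contextually" means looking at the words around a flagged term before blocking. The sketch below does this with a hand-written list of safe phrases; it is a toy for illustration only—production systems use machine-learned classifiers, not phrase lists—and every term and phrase in it is an assumption, not anything from CHEQ's actual product.

```python
import re

BLACKLIST = {"shooting"}
# Hypothetical phrase-level exceptions: benign contexts for an otherwise-blocked term.
SAFE_PHRASES = {"shooting": ["shooting a movie", "shooting hoops"]}

def is_blocked_contextual(text: str) -> bool:
    """Block a flagged term only if it appears outside any known-safe phrase."""
    lowered = text.lower()
    for term in BLACKLIST:
        for match in re.finditer(r"\b" + re.escape(term) + r"\b", lowered):
            # Inspect a small window of surrounding text for a safe context.
            window = lowered[max(0, match.start() - 20):match.end() + 20]
            if not any(phrase in window for phrase in SAFE_PHRASES.get(term, [])):
                return True  # term appears with no recognized safe context
    return False

print(is_blocked_contextual("Police responded to a shooting downtown."))    # True
print(is_blocked_contextual("The crew spent June shooting a movie here."))  # False
```

Even this crude window check captures Rogin's distinction between "a shooting" and "shooting a movie"—the point being that the industry's current blunt matching doesn't do even this much.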