
CO.DESIGN

The attempted coup at the Capitol needs to be brands’ wake-up call about funding online disinformation

A significant lack of control in digital advertising makes brands vulnerable to funding disinformation inadvertently and defunding legitimate news.


[Photo: Tasos Katopodis/Getty Images]

By Jeff Beer | 5 minute read

As chaos erupted at the Capitol in Washington, D.C., on Wednesday, brands quickly hit pause on ongoing advertising and marketing campaigns and scaled back their social media efforts. As with any major news event, marketers need to read the room and know when, say, Dilly Dilly may not exactly hit the right note.

As Northwestern University marketing professor Tim Calkins told Ad Age, “If you are a brand and you were planning some very lighthearted effort in the next couple of days, you might want to relook at that because that is not going to connect with the tonality of the country.”

But while many marketers are wringing their hands about how to respond to the events of one day this week, what they really should be using this time to do is reexamine how and where their brand is appearing and, more important, what it’s funding through its advertising dollars the other 364 days a year. Such outlets as One America News, Newsmax, and Fox News, not to mention scores of YouTube channels and Facebook groups, have been promoting election fraud conspiracy theories over the past two months that helped provide the fuel for Wednesday’s events.

Consumer activist Nandini Jammi, cofounder of Sleeping Giants and Check My Ads, says that marketers have spent entirely too much time worrying about what real news their ads appear alongside—and a shocking lack of time concerned with whether their ads are funding disinformation online.

“A lot of brands that are talking to their agencies about blocking their ads from content related to [this week’s] events have no idea where their ads have been running on the internet,” Jammi says. “Ad-funded disinformation has been [made] possible by companies that don’t know where their ads are running, and removing your ads for one day because people are paying attention is one of the most ineffective things to be doing at this moment.”

Joe Biden’s presidential victory was finally confirmed officially this week, but there was already a clear winner back on Election Day, November 3: Calm. Specifically, the sleep and meditation app that had the inspired idea to sponsor CNN’s “Key Race Alerts” throughout the night and run a 30-second spot during that coverage that featured nothing but rain falling on leaves. The brand turned a unique opportunity into an absolute home run. According to the analytics company Talkwalker, Calm saw a 248% increase in Twitter mentions from the previous day, and earned media reactions the following day declaring it the brand winner of election night.

Fast-forward to the pro-Trump mob that broke into the Capitol building in Washington, D.C., on January 6. Elijah Schaffer, a host on the conservative media outlet BlazeTV, followed what he gleefully called “revolutionaries” inside, posting video of violent clashes, as well as photos from inside House Speaker Nancy Pelosi’s office, including one of her computer desktop.

One of the brands sponsoring his YouTube channel? Calm.

The brand quickly responded, saying, “We do not sponsor that account. Our ad was placed there by YouTube and we are working with them to remove our ads from it.” The situation illustrates perfectly the challenges brands face when it comes to knowing exactly where their logo and ads are going to show up online.

It all seems simple at first glance. If a brand doesn’t want its ads to show up next to certain types of content, it creates a list of words that describe such content and an algorithm will automatically filter them out. But what if that algorithm can’t seem to tell the difference between content that’s racist and content that’s reporting news on racism?

Programmatic advertising technology places millions of ads, and an industry has popped up within that ecosystem promising marketers peace of mind by using algorithms to program keyword block lists that purportedly control where these ads appear. But according to findings by Adalytics.io founder Krzysztof Franaszek, Mastercard, for example, had words like racist, racism, white nationalist, discriminated, protest, and discrimination on its block list. Problem is, the technology routinely blocks all instances of these words without context. So real news reporting on these important issues gets blocked along with any content that’s actually racist or discriminatory.
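The failure mode described above is easy to see in miniature. The sketch below, a hypothetical illustration (the `is_brand_safe` function and the sample pages are invented for this example; the block-list terms echo those reportedly found on Mastercard's list), shows why context-free keyword matching cannot distinguish reporting on racism from racist content:

```python
# Context-blind keyword blocking, as criticized in the article.
# Everything here is a simplified illustration, not a real ad-tech API.

BLOCK_LIST = {"racist", "racism", "white nationalist", "discrimination", "protest"}

def is_brand_safe(page_text: str) -> bool:
    """Return False if any blocked keyword appears anywhere in the text.

    This is the naive matching the article describes: it flags every
    occurrence of a term, with no notion of the surrounding context.
    """
    text = page_text.lower()
    return not any(keyword in text for keyword in BLOCK_LIST)

# A legitimate news headline is blocked exactly like extremist content:
news = "City council condemns racism after protest turns violent"
extremist = "Forum post promoting white nationalist talking points"

print(is_brand_safe(news))       # blocked -- ad revenue denied to real news
print(is_brand_safe(extremist))  # blocked -- the case the list was built for
```

Both pages fail the check, which is precisely the over-blocking problem: the filter achieves its stated goal only by also defunding the journalism covering the same topic.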

So as brands no doubt race to add coup and insurrection to their block lists, they will likely deny ad revenue to legitimate news outlets reporting on this week’s events. Last year, a study by the University of Baltimore found that publishers lost out on up to $3 billion in digital ad revenue due to “over-blocking,” or stopping ads from appearing in articles with “safe or neutral content.”


Recent research by the Interactive Advertising Bureau reported that brands that advertise within news are more likely to see increases in consumer perception or other positive attributes. But Dave Grimaldi, IAB’s executive director of public policy, says organization members are asking if American sentiment is going to shift when government leadership changes. Is that going to restore more trust in news? He asks, “Does that trickle down to ad placement and brands being less gun-shy about the news sites they advertise on?”

Jammi says, “This idea of brand safety online is unique, because it seems to be pushed by one industry that has an interest in ad targeting of this type. If Anderson Cooper were subject to brand safety technology, he would never be monetizable, because he talks about current events every day. If he uttered the word coronavirus, he’d never see an ad.

“In traditional TV advertising,” she continues, “people use common sense, context. They look at the reputation of the organization and host, the audience demographics, and make a decision whether it’s appropriate to advertise there.”

Another challenge in digital advertising, and a limitation in keyword blocking, appears to be the inability to distinguish consistently and accurately between disinformation and news. Platforms like Integral Ad Science, and Oracle-owned Moat and Grapeshot, have risky content categories like violence, crime, and adult content, but don’t appear to have one for disinformation. I reached out to Integral Ad Science to ask for more details about how they approach disinformation, but have not received a reply as of press time.

One solution is for brands simply to cut down on the number of sites on which they advertise, making it possible to be more vigilant. In 2017, Chase Bank cut the number of sites its ads appeared on from 400,000 to just 5,000 and found no difference in its advertising performance metrics.

Jammi says that without an accurate filter for disinformation, there is no such thing as brand safety. “It means they’re treating disinformation exactly like news,” she says. “According to our research, OAN and Hannity are rated more safe than The New York Times. So what is this technology for? Why would you use it?”

Ultimately, brands need to be aware that they are responsible for where their logo shows up and what their ad budgets fund—not just during one bad news day or week, but every single other day and week of the year.
