
Targeted ads aren’t just annoying, they can be harmful. Here’s how to fight back

Online targeted advertising divides and isolates us. It’s time to restore the role of consumers as active participants in regulating online advertising.

[Source Images: nadia_bormotova/iStock]

BY Suzanne LaBarre | 5 minute read

Five years since the Brexit vote and three since the Cambridge Analytica scandal, we’re now familiar with the role that targeted political advertising can play in fomenting polarization. It was revealed in 2018 that Cambridge Analytica had used data harvested from 87 million Facebook profiles, without users’ consent, to help Donald Trump’s 2016 election campaign target key voters with online ads.

In the years since, we’ve learned how these kinds of targeted ads can create political filter bubbles and echo chambers, suspected of dividing people and increasing the circulation of harmful disinformation.

But the vast majority of the ads exchanged online are commercial, not political. Commercial targeted advertising is the primary source of revenue in the internet economy, but we know little about how it affects us. We know our personal data is collected to support targeted advertising in a way that violates our privacy. But aside from privacy considerations, how else might targeting be harming us – and how could these harms be prevented?

These questions motivated our recent research. We found that online targeted advertising also divides and isolates us by preventing us from collectively flagging ads we object to. We do this in the physical world (perhaps when we see an ad at a bus stop or train station) by alerting regulators to harmful content. But online consumers are isolated because the information they see is limited to what is targeted at them.

Until we address this flaw and stop targeted ads from isolating us from the feedback of others, regulators won’t be able to protect us from online ads that could cause us harm.

Due to the sheer volume of ads exchanged online, human supervisors cannot vet each campaign. So increasingly, machine learning algorithms screen the content of ads, predicting the likelihood that they may be harmful or fail to conform to standards. But these predictions can be biased, and they typically only ban the clearest violations. Among the many ads that pass these controls, a significant portion still contain potentially harmful content.

Traditionally, advertising standards authorities have taken a reactive approach to regulating advertising, relying upon consumer complaints. Take the 2015 case of Protein World’s “Beach Body” campaign, which was displayed across the London Underground on billboards featuring a bikini-clad model next to the words: “Are you beach body ready?” Many commuters complained, saying that it promoted harmful stereotypes. Shortly after, the ad was banned and a public probe into socially responsible advertising was launched.

Regulating ads

The Protein World case illustrates how regulators work. Because they respond to consumer complaints, the regulator is open to considering how ads conflict with perceived social norms. As social norms evolve over time, this helps regulators keep up with what the public considers to be harmful.

Consumers complained about the ad because they felt it promoted and normalized a harmful message. But it was reported that only 378 commuters raised complaints with the regulator, out of the hundreds of thousands likely to have seen the posters. This raises the question: what about all the others? If the campaign had taken place online, people wouldn’t have seen posters defaced by disgruntled commuters, and they might not have been prompted to question its message.

What’s more, if the ad could have been targeted to just the subset of consumers most receptive to its message, they might not have raised any complaints. As a result, the harmful message would have gone unchallenged, missing an opportunity for the regulator to update their guidelines in keeping with current social norms.

Sometimes ads are harmful in a specific context, as when ads for high-fat-content foods are targeted to children, or when gambling ads target those who suffer from a gambling addiction. Targeted ads can also harm by omission. This is the case, for example, if ads for shoes crowd out job ads or public health announcements that someone might find more useful or even vital.

These cases can be described as contextual harms: they’re not tied to specific content, but rather depend on the context in which the ad is presented to the consumer.

Machine learning algorithms are bad at identifying contextual harms. Worse, the way targeting works actually amplifies them. Several audits, for example, have uncovered how Facebook has allowed discriminatory targeting that worsens socioeconomic inequalities.

Digging deeper

The root cause of all these issues can be traced to the fact that consumers have a very isolated experience online. We call this a state of “epistemic fragmentation,” where the information available to each individual is limited to what is targeted at them, without the opportunity to compare with others in a shared space like the London Underground.

Because of personalized targeting, each of us sees different ads. This makes us more vulnerable. Ads can play on our personal vulnerabilities, or they can withhold opportunities from us that we never knew existed. Because we don’t know what other users are seeing, our ability to look out for other vulnerable people is also limited.

Currently, regulators are adopting a combination of two strategies to address these challenges. First, we see an increasing focus on educating consumers to give them “control” over how they’re targeted. Second, there’s a push toward monitoring ad campaigns proactively, automating screening mechanisms before ads are published online. Both of these strategies are too limited.

Instead, we should focus on restoring the role of consumers as active participants in the regulation of online advertising. This could be achieved by blunting the precision of targeting categories, by instituting targeting quotas, or by banning targeting altogether. This would ensure that at least a portion of online ads are seen by more diverse consumers, in a shared context where objections to them can be raised and shared.

In the wake of the Cambridge Analytica scandal, efforts were made by the Electoral Commission to prize open the hidden world of targeted political ads in the run-up to the UK’s 2019 election. Some broadcasters asked their audiences to send in targeted ads from their social media feeds, in order to share them with a wider audience. Campaign groups and academics were able to analyze targeting campaigns in greater detail, exposing where ads could be harmful or untrue.

These strategies could also be used for commercial targeted advertising, which would break the epistemic fragmentation that currently prevents us from collectively responding to harmful ads. Our research shows it’s not just political targeting that produces harms – commercial targeting requires our attention too.

Silvia Milano, Postdoctoral Researcher in AI Ethics, University of Oxford; Brent Mittelstadt, Research Fellow in Data Ethics, University of Oxford, and Sandra Wachter, Associate Professor and Senior Research Fellow, Oxford Internet Institute, University of Oxford

This article is republished from The Conversation under a Creative Commons license. Read the original article.
