
Researchers Discovered Data Leak In Facebook’s Ad Software

The loophole, since patched, isn’t the first time Facebook has come under criticism for giving troubling amounts of freedom to advertisers.

[Photo and illustration: rashadashurov/iStock; dolphfyn/iStock]

By Steven Melendez

A loophole in Facebook’s advertising targeting mechanism could have let attackers obtain users’ phone numbers after they visited websites the attackers controlled, a group of scientists revealed in a paper presented last week.

Facebook, which awarded the researchers a $5,000 bug bounty, has since taken steps to thwart similar attacks, and neither the company nor the researchers say they have any evidence the technique was ever used maliciously.

The potential attack, presented by researchers from Northeastern University and institutions in France and Germany at the Federal Trade Commission’s PrivacyCon, exploits the way Facebook allows advertisers to target ads to custom audiences. Those can be built based on users’ interests, visits to a particular website, email addresses, phone numbers, or other factors known to the social networking company.

Facebook and its rival social networks allow advertisers an essentially unparalleled degree of freedom in automatically targeting messages to particular people based on their interests and demographics. But those liberal advertising policies have come under fire in recent years, with critics saying they enabled everything from racial discrimination and hate speech to surreptitious Russian propaganda.

In this case, though the system is designed not to let advertisers learn the identities of users based on information they don’t make public, the researchers realized that ad audiences built from a combination of different lists—say, a list of phone numbers and a list of email addresses—would include each user only once. That meant the size of a cleverly built audience could reveal whether such a pair of lists had any duplicates, which would indicate that a phone number from one and an email address from the other belonged to the same user.

“Facebook does the smart thing and says, oh, both of those refer to the same user, so I’m only going to increase the number by 1,” says Alan Mislove, an associate professor of computer science at Northeastern and one of the paper’s authors.

Even though Facebook didn’t report the exact number of matches, and rounded the total number of people in the combined ad audience, the researchers found they could detect whether adding an identifier that potentially belonged to a user already in the audience changed the rounded estimate; the presence or absence of that change revealed whether the two identifiers referred to the same person.
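To make the counting trick concrete, here is a small, self-contained simulation of the idea described above. Everything in it is hypothetical: the estimate_reach function, the toy identifier directory, and the round-down-to-ten rule are stand-ins for the reach estimates Facebook actually showed advertisers, which this sketch does not attempt to reproduce.

ROUND_TO = 10  # assumed rounding rule for the sketch: estimates round down to the nearest 10

def estimate_reach(identifiers, directory):
    # Count *distinct* users matching any uploaded identifier, mimicking a
    # platform that de-duplicates people across emails and phone numbers,
    # then round the count the way an advertiser-facing estimate might be.
    users = {directory[i] for i in identifiers if i in directory}
    return (len(users) // ROUND_TO) * ROUND_TO

# Toy directory mapping identifiers to user IDs. The victim (user 999)
# owns both "victim@example.com" and "+1-555-0100".
directory = {f"padding{i}@example.com": i for i in range(18)}
directory["victim@example.com"] = 999
directory["+1-555-0100"] = 999   # same person as the victim's email
directory["+1-555-0199"] = 1000  # a different person

# Padding chosen so the distinct count (18 padding users plus the victim = 19)
# sits just below a rounding boundary, making a one-person difference visible.
base = [f"padding{i}@example.com" for i in range(18)] + ["victim@example.com"]

for phone in ["+1-555-0100", "+1-555-0199"]:
    print(phone, "->", estimate_reach(base + [phone], directory))

# The victim's own phone leaves 19 distinct users (estimate 10), while a
# stranger's phone makes 20 (estimate 20); the flip in the rounded estimate
# reveals which candidate phone number belongs to the victim.

In practice an attacker would repeat that comparison across a list of candidate phone numbers; the researchers’ actual method is more involved, but the de-duplication and rounding behavior sketched above is the core of the leak.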

The company responded initially by suppressing size estimates for audiences created with multiple sets of information, such as email addresses and phone numbers, according to the paper. The company later restored some information after taking other safeguards, a spokesperson says.

“We’re grateful to the researchers who brought this issue to our attention,” a Facebook spokesperson tells Fast Company in an email. “We didn’t see any abuse of this complex technique, and have restored reach estimation on a limited basis now that we have added appropriate safeguards against potential abuse.”

Facebook has previously announced changes to its ad targeting mechanisms to block other sorts of abuse. After Aleksandra Korolova, now an assistant professor of computer science at the University of Southern California, found in 2010 that sneaky advertisers could target ads to particular individual users and potentially extract private data about them, the company took steps to make that impossible.


And last fall, after ProPublica discovered the social network allowed advertisers to target people with explicitly racist interests like “how to burn Jews,” the company limited what advertisers could target.

“The fact that hateful terms were even offered as options was totally inappropriate and a fail on our part,” wrote chief operating officer Sheryl Sandberg at the time. “We removed them and when that was not totally effective, we disabled that targeting section in our ad systems.”

ProPublica has also on multiple occasions discovered ways advertisers could potentially exclude users of particular ethnic groups from seeing their ads, which could violate long-standing anti-discrimination laws.

“This was a failure in our enforcement,” Rob Goldman, the company’s vice president for ad products, said in a statement last November. “We must do better.”

Facebook’s overall approach of showing ads to only users meeting certain criteria has also been critiqued for making it difficult for outsiders to know what paid messages are being delivered through the network. For instance, during the 2016 election cycle, Russian-affiliated groups were allegedly able to run more than 3,000 ads on Facebook, as well as unpaid content on that network and Facebook-owned Instagram.

Facebook has since added workers to review ads, as well as tools to make it easier to see what ads a particular page on the network is running.

“To provide even greater transparency for people and accountability for advertisers, we’re now building new tools that will allow you to see the other ads a Page is running as well, including ads that aren’t targeted to you directly,” wrote Joel Kaplan, VP for global public policy, in October.



ABOUT THE AUTHOR

Steven Melendez is an independent journalist living in New Orleans.

