For the last three years, the Internet has been trying to get me to sell my eggs. I see the ad everywhere, the kind that offers female college graduates thousands of dollars for an invasive procedure to remove some of their reproductive ammo. Clearly, some kind of intelligence–human or algorithmic–knows I’m female, knows I went to college, and possibly knows I carry around a lot of student loan debt (and really need those thousands of dollars).
It’s like a shadow I can’t shake. The ad is targeted to some version of myself that remains obscured behind a zillion interactions between advertisers and data brokers, and I have no idea how much or how little of it is true–or how much of it I’d ever want anyone else to know.
So what do the rest of my ads say about me?
Last week, a handful of researchers began to study that question in a serious way. The Office for Creative Research, a data visualization outfit founded by three New York Times alumni, along with Pulitzer Prize-winning data scientist Ashkan Soltani, built a browser extension called Floodwatch that allows individuals to track the variety of ads pinned to them. Install Floodwatch, and it’ll begin to collect details about your ads, like who publishes them and the type of content they advertise. After a couple of weeks with the browser extension, you’ll be able to see what kind of ads your online identity attracts. And with enough users, Floodwatch researchers hope to learn what kind of information about people informs these types of micro-targeted marketing decisions.
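Floodwatch’s internals aren’t spelled out here, but the kind of collection an extension like it performs can be sketched roughly: scan a page’s markup for frames served from known ad hosts and log where they came from. Everything below is illustrative, not Floodwatch’s actual code — the ad-domain list is a toy stand-in for a real maintained filter list, and the record format is invented:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Hypothetical ad-serving domains -- a real extension would rely on a
# maintained filter list, not this toy set.
AD_DOMAINS = {"doubleclick.net", "adnxs.com", "googlesyndication.com"}

class AdFrameCollector(HTMLParser):
    """Records the source of every iframe pointing at a known ad host."""

    def __init__(self):
        super().__init__()
        self.records = []

    def handle_starttag(self, tag, attrs):
        if tag != "iframe":
            return
        src = dict(attrs).get("src", "")
        host = urlparse(src).netloc
        # Compare the registered domain (last two labels) against the list.
        domain = ".".join(host.split(".")[-2:]) if host else ""
        if domain in AD_DOMAINS:
            self.records.append({"host": host, "src": src})

page = """
<html><body>
  <iframe src="https://securepubads.googlesyndication.com/ad?slot=1"></iframe>
  <iframe src="https://player.example.com/video"></iframe>
</body></html>
"""
collector = AdFrameCollector()
collector.feed(page)
print(len(collector.records))  # only the ad-network iframe is logged
```

A real extension would also capture timing, page context, and the ad creative itself, but even a bare log of which ad hosts appear on which pages is enough to start asking who gets shown what.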
“We can make a lot of guesses as to what’s involved in advertising processes,” says Office for Creative Research cofounder Jer Thorpe. “There’s a deep suspicion from a number of people that there’s some pretty problematic discrimination practices happening in the ad industry, especially online. [Floodwatch] is a chance for us to reverse engineer the algorithms that advertisers are using and understand them a little bit better.”
By way of example, Thorpe cites the work of Federal Trade Commission chief technologist Latanya Sweeney. In 2012, Sweeney showed that Google searches on “black-sounding names” (like Latanya, for example) regularly pulled up ads on Google’s AdSense for bail bonds or criminal background checks. Searches on white-sounding names didn’t. “We know that the process that’s used to place these ads is differentiating,” Thorpe says. “The edge where that becomes an issue is when that discrimination happens because of race or gender or any of those types of criteria.”
For example, what if algorithms offered you certain discounts through ads that they didn’t offer others? What if those ads offered you a discount on chocolate bars because your browser history showed you were yo-yo-ing with diets? What if ads on a website implicitly revealed your sexual orientation, but you had no way of controlling when they might pop up?
Thorpe and his collaborators hope to reach an initial goal of 10,000 Floodwatch users to start processing what all that data could mean. They’re already 60% of the way there. Past that, Thorpe is aiming for 100,000 users–at which point discriminatory practices could be identified with more confidence.
“People don’t really know what’s happening, and I don’t think that’s fundamentally an ethical practice,” Thorpe says. “We want to push against the ubiquitous corporate narrative around data, where [Big] Data’s always good, this data utopianism almost.”
Activist tool or not, Floodwatch is fun–and fascinating. “The best way to do it is install it, forget about it for two weeks, and come back to it,” Thorpe adds. “It’s a surprising exercise.”