Remember search stories? With its first TV ad ever, aired during Super Bowl XLIV in 2010, Google depicted a world where search had become a regular part of the human experience. Now, an ad campaign by UN Women has shown search in an altogether different light–a black mirror in which our darker thoughts and prejudices are reflected back through an autocomplete window. But is this actually a fair representation of people’s searches?
“Parisian Love,” the original Google ad, was remarkably resonant and ripe for parody. Google even created a tool for users to make their own–one that was taken offline early this year. By the time the parody Tumblr/Twitter account Google Poetics launched in the fall of 2012, search suggestions had gone from an ad campaign gimmick to a full-blown meme.
However, what began as a sometimes funny, sometimes strange quirk of the idiosyncratic ways we all search the Internet became something far more serious. Late last month the United Nations Entity for Gender Equality and the Empowerment of Women unveiled an ad campaign that uses the autocomplete suggestions for phrases like “Women shouldn’t…” or “Women need to…” in order to “expose negative sentiments ranging from stereotyping as well as outright denial of women’s rights.”
It’s a powerful, thought-provoking ad, one that brings into sharp focus a tool we use every day while asking what that tool has to say about us. Over at the Guardian, writer Arwa Mahdawi argues that what it reveals isn’t pretty:
Google has become something of the secular equivalent of a confessional box. Within the confines of a search bar you can ask questions or express opinions you would never admit to in public. Our most popular searches are, to some degree, an uncensored chronicle of what, as a society, we’re thinking but not necessarily saying.
However, it matters how search algorithms actually work, and autocomplete suggestions are a little more complicated than an “uncensored chronicle.”
The first big autocomplete quirk is that no two people are guaranteed to see the same search suggestions. Your results, as Google states, depend on a number of factors: where you’re located, whether or not you’re signed in, your search/web history, et cetera. Google’s policies also state that it will remove both hate speech and pornographic completions (you can still search for them; autocomplete just won’t help you). And don’t forget about “freshness”–the priority autocomplete gives to recently popular searches tied to current events or trending topics.
In a post for Slate, David Auerbach breaks down the known quirks of Google’s search algorithms, arguing that while the suggestions are worth noting, the story doesn’t end there–it continues with what we actually search for, and with the results that are actually displayed. “Of the top results that aren’t about the UN Women ad campaign,” Auerbach notes, “not one of them unequivocally promotes an anti-woman position.”
So, are autocomplete algorithms really a barometer for public opinion? Writing a post that reports on sentiments like those in the UN Women ad campaign could actually feed into the “freshness” heuristic used by Google, making it more likely for those very phrases to pop up in autocomplete–even if nobody typing them agrees.
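To see why a burst of reporting could temporarily outrank years of accumulated searches, here is a minimal sketch of recency-weighted ranking. The scoring function, the half-life, and the query volumes are all illustrative assumptions–Google’s actual “freshness” heuristic is not public.

```python
import math

def freshness_score(count, age_hours, half_life_hours=24.0):
    """Toy recency weighting: raw query volume decays exponentially
    with age, so a small recent burst can outrank a much larger but
    older total. (Illustrative only -- Google's real heuristic is
    not public.)"""
    return count * math.exp(-math.log(2) * age_hours / half_life_hours)

# Hypothetical completions for one prefix, as (query, volume, age in hours):
candidates = [
    ("women should have equal rights", 5000, 200),  # large volume, but old
    ("women should see this ad", 800, 6),           # small volume, but fresh
]

ranked = sorted(candidates,
                key=lambda c: freshness_score(c[1], c[2]),
                reverse=True)
```

Under these made-up numbers, the fresh query scores roughly 670 against about 16 for the older one, so news-driven chatter about a phrase briefly dominates the suggestion list regardless of what searchers actually believe.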
Auerbach makes note of one important workaround: the “rel=nofollow” attribute for links, which tells search engines not to let your link count toward the target page’s rankings. This matters because, as Auerbach says, “Google does not distinguish an approving link from a disapproving link.”
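In practice the attribute is just a marker on the anchor tag, which tooling can detect mechanically. Here is a small sketch using Python’s standard-library html.parser; the checker class and the example link are hypothetical, for illustration only.

```python
from html.parser import HTMLParser

class NofollowChecker(HTMLParser):
    """Collects each link on a page along with whether it carries
    rel="nofollow", i.e. whether it asks search engines not to pass
    ranking credit to the target page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            attrs = dict(attrs)
            # rel is a space-separated token list, e.g. rel="nofollow noopener"
            rel_tokens = (attrs.get("rel") or "").split()
            self.links.append((attrs.get("href"), "nofollow" in rel_tokens))

# A disapproving link that still avoids boosting its target:
checker = NofollowChecker()
checker.feed('<a href="http://example.com" rel="nofollow">this awful site</a>')
```

Because the attribute travels with the link itself, a critic can point readers at a page they’re condemning without that citation reading, to a search engine, as an endorsement.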