Last night Google’s deep-learning expert François Chollet tweeted a long and illuminating thread decrying the state of artificial intelligence programming. In light of Facebook’s Cambridge Analytica debacle–where a data firm was able to obtain millions of user profiles–Chollet sees the increasingly frightening prospect of algorithms being trained to control the information supply. While it’s scary that Facebook essentially allowed a Trump-affiliated firm to execute a series of targeted ad campaigns, the hidden story is Facebook’s innate power and how it can be used for psychological control.
Indeed, this is a very interesting and horrifying problem. Platforms adopting algorithms to fine-tune the content they feed to users present an easy pathway for psychological control. “If Facebook gets to decide, over the span of many years, which news you will see (real or fake), whose political status updates you’ll see, and who will see yours, then Facebook is in effect in control of your political beliefs and your worldview,” writes Chollet.
But this criticism is somewhat rich coming from the deep-learning guy at Google. While perhaps we should pause, nod, and agree that this is a problem, Chollet may not be the most appropriate messenger. Google, after all, is a company that itself has created some very powerful algorithms, ones that people rely on day in and day out. I would go as far as to say that Google’s artificial intelligence programs are far more insidious than Facebook’s.
Every time you search for something on Google, a piece of software analyzes what you wrote and populates what it considers to be the most relevant results. This is exactly the feedback loop Chollet describes, whereby an algorithm helps dictate its end user’s worldview. While Google may seem a bit more cognizant of the weight of this challenge, that does not absolve it of its role as one of the main contributors to it.
Bad Actors Love Facebook And Google
Time and time again, we’ve seen Google’s algorithm manipulated by bad actors looking to promulgate misinformation. Like Facebook, Google has pledged to crack down on the problem, but many have wondered if either of the top platforms has done enough. Google introduced a “fact check” feature some time ago to highlight whether or not a result is true. Outside experts told Fast Company late last year that the company didn’t seem to be doing enough to keep up with the problem.
Earlier this week, the company described changes to its search algorithm as part of its new Google News Initiative. “We’ve developed a more systematic approach,” said Google’s VP of News, Richard Gingras. The company has adjusted its software to “show results from more authoritative sources.” The two pillars Google considers most important when weighing its search algorithm, said Gingras, are “relevance and authoritativeness.”
Google-owned YouTube, however, has proven to be a wasteland of search results. Nearly every time a tragedy has occurred, a spate of videos touting fake news and conspiracy theories has flooded the top of YouTube search results. Only after numerous articles were written about this problem did the company seem to crack down on it. The issue, it seems, is that YouTube has a trending element to its results. So if something is catching people’s eyes–whether real or fake–it has a higher likelihood of being surfaced by the algorithm because it creates more engagement.
Google’s answer to this has been to add a news section above YouTube’s regular search results, as a way to show the “real” results. And yet the algorithm often still surfaces this malicious content, which plays the exact same ideology-shaping role that Chollet accused Facebook of playing.
This is all to say that it’s not enough for the software gurus who created this problem to point fingers and navel-gaze. If Chollet is so concerned about the hell that platform algorithms have wrought, maybe he should look inward, too.