When Google co-founders Sergey Brin and Larry Page famously penned the “Don’t Be Evil” manifesto prior to their 2004 IPO, they probably couldn’t have anticipated how many times that same phrase would be trotted out to criticize Google’s data-collection methods and outsized influence online. But increasing scrutiny, especially after Edward Snowden’s NSA disclosures, doesn’t change the fact that Google rarely has to answer the question “Hey, are you evil?” from the public.
On Wednesday, however, someone did put a version of that question in front of Corinna Cortes, the head of Google Research in New York. On a post-lunch KDD conference panel on data privacy and social good, an audience member asked, “What does ‘don’t be evil’ mean in a post-Snowden world?”
Cortes, a renowned machine learning data scientist, chuckled uncomfortably at first. To the best of my transcription abilities, here’s the rest of her answer:
I think–we at Google [unintelligible] don’t be evil, or it’s don’t do evil, as far I recall. Well, you know, you cannot change who you are, but we should always continue with … the use of data and not do anything that in any way can compromise people. And I think we haven’t changed one bit, actually, after Snowden. We continue with our same policies for what you can do, into what permissions that you have in order to get access to the data. I don’t think we have seen any failures in our systems, fortunately. And we just march along. It hasn’t changed anything. We still uphold that even the most–people that have access to the most of the data–they can only look at it four hours at a time, so you can never [unintelligible] data in an account for a [longer] time period for us. Fortunately, for us, it meant no change.
Contrary to Cortes’s argument about security, other panelists pointed out that it’s not a data leak that many people fear, but how their data is being used to, say, target ads. Or how using data, even for good intentions, can go awry. As lots of other academics and watchdogs have noted in the past, how data is collected, repackaged, and sold is a ridiculously inscrutable process. It’s also constantly evolving. And Google has almost certainly played a major role in that evolution.
Cortes’s answer isn’t a satisfying one, but given Google’s massive presence, it would probably be difficult to arrive at something that would be. Evil, for what it’s worth, is pretty subjective. So are “social good” and “privacy.” Transparency, though, might be a better (and more tangible) metric. Google can collect real-time analytics on us, after all, but there’s almost no way to track how their decisions uphold or undermine their “don’t be evil” ethos. What’s trust without transparency?