Google is kicking off 2014 with some good old-fashioned privacy infringement. The search giant’s recent decision to link Gmail addresses to Google+ was met with considerable backlash among users who don't want their inboxes exposed to spam. But according to former Tumblr lead developer Marco Arment on his blog, we really shouldn’t be surprised at all:
To be clear, for anyone who thinks Google is some benevolent, selfless entity handing out free services to everyone out of the goodness of its heart: Google’s leadership, threatened by the attention and advertising relevance of Facebook, is betting the company on Google+ at all costs.
To that end, writes Arment, Google will do anything, up to and including angering the users of its core products and services, if it means propping up Google+ against Facebook's overwhelming dominance. But Arment may be missing the forest for the trees in his particular case against Google.
In a 2012 article for TechCrunch, writer Josh Constine argued that Google stopped caring about whether or not people used Google+ fairly early on. What mattered instead was that people were simply on it.
Google scrambled to build Google+ because it watched Facebook and saw users were willing to volunteer biographical data to their social network, and that data is crucial to serving accurate ads users want to click. Search keywords and algorithmic analysis of your Gmail and other content weren’t enough.
For all intents and purposes, Constine's argument holds up in 2014. Allowing people to reach you via Google+ doesn't necessarily mean users will engage with it any more; they're not suddenly going to start posting content or leaving Facebook in droves. Perhaps it instead serves as another way for Google to nail down a clearer picture of who you are and pin you to a single account, wrapped neatly and tied with a bow.
But even though Arment may be off the mark as to why Google made such a move, he's right about the problem with our collective reaction: we're still talking about Google in light of its old mantra, "Don't be evil."
We need to stop. Immediately.
In fact, the notion of evil has always been a slippery one in discussions about Google, but to Google itself, what constitutes "evil" has always been clear: evil is what Google says it is.
That's how it was as far back as 2003, when writer Josh McHugh's oft-cited and nigh-prescient profile of the company was published in Wired. The relevant quote comes from then-CEO Eric Schmidt, who, when asked to define evil, said it was "what Sergey [Brin] says is evil." In the context of McHugh's story, the motto illustrated a paradox of sorts between an implied Tron-esque fight for the user and the burgeoning needs of a publicly traded company.
"The only reason anyone uses the word evil about Google," wrote Mat Honan for Gizmodo, "is because Google asked us to." Evil is subjective, argued Honan, and therefore only useful vis-à-vis Google if discussed with the company's definition in mind. But even in March of 2012, when Honan's piece was published, it was quite clear that the company wasn't living up to its own standard.
The foolishness of applying any sort of morality to the company was further echoed last October by Ian Bogost in The Atlantic. Bogost posited that Google's definition of evil is whatever stands in the way of pragmatism and serviceability, operating under the assumption that everything Google does is a good thing for the world.
As for virtue, it's a nonissue: Google's acts are by their very nature righteous, a consequence of Google having done them. The company doesn't need to exercise any moral judgement other than whatever it will have done. The biggest risk—the greatest evil—lies in failing to engineer an effective implementation of its own vision. Don't be evil is the Silicon Valley version of "Be true to yourself." It is both tautology and narcissism.
Our problem, according to Bogost, is not that the company's behavior is sociopathic (on the contrary, Google is quite transparent about what it does) but that there is a dissonance between our interpretation of evil as "wickedness" and the more practical engineering interpretation of evil that roughly equates to "impediment."
The two do not mix, and we've bought into the illusory notion that our individual values align with those of a corporate entity that needs our data to remain profitable. Bear in mind that Google's actions have stirred up enough regular controversy to fill a very long Wikipedia page.
And the next time Google, say, decides to buy an advanced robotics company, don’t ask if its ethics will survive. That’s beside the point. Google doesn’t care about you. It cares about your data. Act accordingly.