
Google, you auto-complete me

At what point will Google’s predictive technology grow so powerful that we begin thinking of its personalized recommendations as our own?

[Illustration: Delcan & Co.]

I don’t like to say “hi.” I’m a “hey” person. But more and more, I find myself greeting friends and colleagues with a “hi” on email. Why? Because Google suggests that I do. In May, Gmail introduced a new “Smart Compose” feature that uses auto-complete technology to predict my next words, which appear in gray. I accept them simply by hitting Tab.

Words matter to me. I am a professional writer, after all. But then Gmail made it tantalizingly easy to say “hi” instead of “hey,” and Google’s prediction, albeit wrong at first, became self-fulfilling. It wasn’t until two weeks after I began using Smart Compose that I realized I had handed over a small part of my identity to an algorithm.

This sort of predictive technology is everywhere: Amazon suggests products aligned with your shopping history. Apple provides a special menu for the iOS apps you’re most likely to open next. Spotify tailors playlists to your musical tastes. And Facebook literally chooses the stories from friends you should see first, last, or never—then notifies you 365 days a year that it’s time to say “happy birthday” to someone out there.

Google, however, is the torchbearer when it comes to knowing what we want. It was personalizing ads when Zuckerberg was still in middle school, and auto-completing our searches before anyone attempted to sound out the acronym GDPR. At the Google I/O developer conference this past May, held in the company’s Mountain View, California, hometown, the search giant introduced a suite of new features that further eases us into autopilot. The Android Pie operating system, which began rolling out in August, doesn’t just suggest the app you might want to open next, such as Phone or Runkeeper; it offers the next action you might take, such as calling your mom or going for a run, based on your previous usage. (Since I/O, Google has shared another update, to Google Maps, that offers personalized ratings for restaurants and bars, predicting how much you’ll like each place.)

Then there’s Duplex, a forthcoming voice assistant that, in Google’s demos, was able to call a restaurant and negotiate a table with a humanlike personality that served as a surrogate for the user’s own. Its vocal fry and frequent “umms” were so uncanny that many in the media accused it of being faked, though when Fast Company recently tried the service, it seemed to work as advertised.

Duplex’s debut in May was met with applause by the company’s fanboy developers. Soon after, outside the conference’s cocoon, the implications began sinking in. These sorts of advancements may seem thrilling—or at least benignly helpful—at first. But what do they all add up to? At what point does Google’s power of suggestion grow so strong that it’s not about how well its services anticipate what we want, but how much we’ve internalized their recommendations—and think of them as our own? Most of the conversation around artificial intelligence today is focused on what happens when robots think like humans. Perhaps we should be just as concerned about humans thinking like robots.

“The irony of the digital age is that it’s caused us to reflect on what it is to be human,” says tech ethicist David Polgar, founder of the All Tech Is Human initiative, which aims to better align technology with our own interests. “A lot of this predictive analytics is getting at the heart of whether or not we have free will: Do I choose my next step, or does Google? And if it can predict my next step, then what does that say about me?”

[Illustration: Delcan & Co.]
Polgar is currently collaborating on research with Indiana University that asks if internet communications are botifying human behavior. In the age of Twitter, chatbots, and auto-complete, he’s worried that “our online conversations are becoming so diluted that it is difficult to determine if [a message has been] written by a human or a bot.” Even more troubling: We may no longer care about the distinction, and our vocabulary and conversation quality are suffering as a result.

Trading a “hey” for a “hi,” of course, is only a minor loss of individuality. My emails weren’t all that unique anyway, according to Lauren Squires, a linguist and associate professor in the English department at Ohio State University. “So . . . [Google] is going to create these new set phrases for us, and we’ll be locked in and never stray from them?” she asks with a laugh. “But we kind of do that anyway! I don’t want to underplay the creativity that goes into language, but a lot is dependent on scripts.” She points to my rote greeting to her at the start of our interview as an example. Squires herself uses Google’s AI-driven quick replies when emailing on her phone. “They’re not giving us new patterns; they’re encoding patterns that already exist,” she says. As for the subtle difference between “hi” and “hey,” she thinks synonyms are overrated. “Whether you have one word for chew or two words—chew and masticate—I don’t know if it’s better to have two words for that.” To Squires, much of our language is about function, not flourish.

Word choice unto itself may not always matter. The larger concern is how rapidly a user might alter his own behavior simply because of this single bit of Google’s user interface. “People inside larger tech companies like to say, ‘Online behavior is a mirror,’ ” says Polgar. “I disagree. The very fact that [these companies are] altering your environment and sending certain cues is inherently going to alter your behavior.”

Looking at a single weekend of emails and notifications on my phone, it’s almost nauseating to count all the apps telling me what to do. Twitter alerts me to people I ought to follow. Facebook urges me to read every single comment on a graduation-day post from an acquaintance I should probably just unfriend. LinkedIn spots a work anniversary and nudges me to “congratulate” my contact. Google wants me to take a photo and leave a review of a Starbucks, and Groupon tells me to redeem my deal for a two-for-one Taco in a Bag before it expires. This is what Tristan Harris, the former design ethicist at Google who cofounded the Center for Humane Technology, has described as the co-opting of our minds. “By shaping the menus we pick from,” he wrote in a 2016 essay, “technology hijacks the way we perceive our choices and replaces them with new ones.”

I’d like to believe that I’m immune to these messages—we all do—but our code is malleable. In 2014, Michigan State doctoral student Mijung Kim created a weather forecast app called Weather Story. She wanted to see if, over time, subjects who received push notifications from the app began opening it more often. Unsurprisingly, they did. But Kim also discovered that they began opening the app with increasing speed. It was as if their reflexes were being optimized to respond to the app.

Silicon Valley is just beginning to acknowledge that it may be pushing engagement too far. Apple’s upcoming iOS 12 will introduce a series of tools to track and limit your app usage, and even leverage AI to mute some push notifications. Google’s Android Pie system offers similar features and the option to turn your screen an unappealing gray at night. Even Instagram has recognized a phenomenon that it’s dubbed “zombie scrolling” and has rolled out a new interface to help users break the habit. But advertisers, after all, want their users engaged.

Relying on tech companies to self-regulate will only get us so far. Google, for one, is all too aware of how it can affect user behaviors, at least according to a video it produced in 2016, which leaked in May via The Verge. Narrated by Nick Foster, the head of design at X, Alphabet’s moonshot factory, the “Selfish Ledger” is a thought experiment inspired by epigenetics, a pre-Darwinian understanding of genetics. Epigenetics proposed that an organism’s experiences accrue over time into a “ledger” of ingrained behaviors that is passed along to offspring. Picture it as DNA built from experiences.

In the era of big data, Foster imagines Google using digital epigenetics to cause the next societal revolution. “As gene sequencing yields a comprehensive map of human biology, researchers are increasingly able to target parts of the sequence, and modify them, in order to achieve a desired result,” he says. “As patterns begin to emerge in [users’] behavioral sequence, they too may be targeted. The ledger could be given a focus, shifting it from a system that not only tracks our behavior but offers direction toward a desired result.”

That “desired result” would be of Google’s choosing. In a grocery app, Foster explains, a user might be pushed to local bananas with a bright red notification—because Google values sustainability. Eventually, Foster suggests, the multigenerational data that Google collects could give it a “species-level understanding” to tackle societal topics like depression, health, and anxiety. What he doesn’t say: For Google to solve those problems, you’d have to hand over not just your data, but also your agency. The company has since distanced itself from the video, releasing a statement saying that the “Selfish Ledger” was created to “explore uncomfortable ideas and concepts in order to provoke discussion and debate. It’s not related to any current or future products.” But the potential for Google and other tech companies, such as Facebook and Amazon, to exercise this kind of power remains.

To prove his own existence, Descartes came up with the simple rule: “I think, therefore I am.” But if technology has divorced thought from action and turned consciousness into reflex, are we truly alive? I side with Descartes. The answer is “no.”

That is, unless Google suggests otherwise.

About the author

Mark Wilson is a senior writer at Fast Company. He started Philanthroper.com, a simple way to give back every day.
