How Google’s Robots Can Learn Like Humans

Fresh from being named our Most Innovative Company of 2014, and with AI firm DeepMind now in its stable, Google sees a future where technology learns from its mistakes.

[Image: The Pattern Library/Natalia de Frutos]

Is this how Google becomes more machine than man?


Our freshly minted No. 1 Most Innovative Company of 2014 has, of course, an ongoing interest in robotics. That interest ticked up yet again recently when word emerged that it had been working with Foxconn, the Taiwanese manufacturing giant long synonymous with Apple, to make Foxconn’s assembly lines “smarter.” How it might intend to do that points to its most critical acquisition of recent weeks: Google’s $400 million purchase of a little-known artificial intelligence firm from the U.K. called DeepMind.

One of DeepMind’s cofounders, Demis Hassabis, possesses an impressive résumé: software developer, neuroscientist, and teenage chess prodigy are all among the bullet points. But as The Economist suggested, one of Hassabis’s better-known contributions to society might be a video game: a niche but adored 2004 simulator called Evil Genius, in which you play a malevolent mastermind hell-bent on world domination.

Indeed, Google appears to be fortifying a fief within the robotics kingdom that, if it has its way, may allow it to do just that. Under Andy Rubin, who led the original Android project, Google has already acquired several futuristic startups over the last several months, specializing in everything from industrial robot arms to spatial-recognition software to sturdy, AT-AT-like machines fit for the battlefield.

That makes for a suddenly very sweet pot for Foxconn. It also leaves us with some interesting breadcrumbs pointing to Google’s plans for the future, most of which hinge on the very technology DeepMind brings to the table: machine learning. (Google declined to comment for this article.)

So, what is machine learning, exactly? Stanford University computer science professor Andrew Ng defines it as “the science of getting computers to act without being explicitly programmed.” In fundamental terms, machine learning is a branch of artificial intelligence meant to replicate the way humans take in information from their environment to make better-informed choices in the future. Much of this happens unconsciously: touch a scalding kettle once, and you learn not to touch it again.
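The kettle example can be sketched in a few lines of code. The toy program below, with entirely made-up temperature data, learns a “too hot to touch” cutoff from past experience instead of having that rule hard-coded by a programmer; that, in miniature, is the difference Ng is describing.

```python
# A minimal sketch of learning from experience rather than explicit rules.
# The temperatures and labels below are invented illustration data.

def train_threshold(examples):
    """Learn a temperature cutoff from (temperature, was_painful) pairs,
    instead of hard-coding a 'don't touch above X degrees' rule."""
    painful = [temp for temp, hurt in examples if hurt]
    safe = [temp for temp, hurt in examples if not hurt]
    # Place the cutoff midway between the hottest safe touch
    # and the coolest painful one.
    return (max(safe) + min(painful)) / 2

def should_touch(temp, cutoff):
    """Decide using the learned cutoff, not a programmed-in constant."""
    return temp < cutoff

experience = [(20, False), (35, False), (45, False), (70, True), (90, True)]
cutoff = train_threshold(experience)

print(cutoff)                    # 57.5
print(should_touch(30, cutoff))  # True: cool enough
print(should_touch(80, cutoff))  # False: learned to avoid it
```

Feed the same program different experiences and it learns a different rule; no one ever edits the code itself.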


Machine learning works the same way, and the technology already plays a critical role in nearly everything Google does. Search, for example, uses it to deliver you more personalized results based on your history; Google Now’s voice interface is built to familiarize itself with your unique speech patterns; and even YouTube applies machine learning across large datasets to more efficiently serve you funny cat videos.

DeepMind as a company, though, is shrouded in secrecy. Few people outside the U.K.-based startup are familiar with its core technology, although Digital Trends’s Geoff Duncan notes that it concerns bottom-up AI systems that deal with complex concepts. DeepMind’s acquisition will undoubtedly shore up some of the areas Google already competes in. From a user perspective, that means imperceptible improvements like less noise in search results, and maybe even predictive targeted advertising for products you didn’t know you wanted to buy.

Mundane stuff, in other words. Which is why it is perhaps more exciting to imagine some of the ways advanced AI can be applied to Google’s future plans, especially when you take into consideration companies recently brought into its fold, like Nest and Boston Dynamics. “DeepMind raises the possibility of allowing robots to self-train by observing humans or other robots and possibly do the job even better,” Steve DeAngelis, CEO of Enterra Solutions, a company that specializes in cognitive reasoning platforms, told Fast Company. “This could drastically reduce the expensive programming time needed to deploy robots.”

Take Google’s fleet of driverless vehicles, for example. As a thought exercise, let’s pretend Google’s vehicles one day vie with UPS, and Amazon’s drones, to deliver your packages. (One can dream.) Let’s say that Nest sensors placed throughout an urban community can roughly approximate when people are done commuting home, giving Google Maps an idea of when the roads are least congested and, more important, when customers are free to receive their orders. Google’s robotic delivery cars could use this data, and learn from it, to find the most efficient delivery window for each individual customer. “If a robot could learn a task by itself by watching experts, the ability to deploy robots quickly into a task at a low cost becomes more realistic,” said DeAngelis.
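To make the thought exercise concrete: here is a hypothetical sketch of how a delivery system might pick a customer’s window from observed arrival-home times. Everything here, the function name, the data, the hour-counting approach, is invented for illustration; a real system would be far more sophisticated.

```python
# Hypothetical sketch: choose the delivery hour when a customer is most
# often home, based on observed arrival times. All data is invented.
from collections import Counter

def best_delivery_hour(observed_arrival_hours):
    """Return the hour of day (0-23) at which this customer most often
    arrives home, as a rough guess at the best delivery window."""
    counts = Counter(observed_arrival_hours)
    return counts.most_common(1)[0][0]

# A week or so of sensor observations: this customer usually gets home at 6 p.m.
arrivals = [18, 19, 18, 17, 18, 20, 18, 19]
print(best_delivery_hour(arrivals))  # 18
```

As more observations accumulate, the guess adapts on its own, which is the “learn from it” part of the scenario above.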

It’s mind-bending stuff. And you can see why competitors like Apple, Microsoft, IBM, and even Yahoo view advanced artificial intelligence as core to future infrastructure. Smart technology begets streamlined autonomy; streamlined autonomy means cheaper service charges, return customers, and, suddenly, a more attractive bottom line. While Google clearly isn’t alone in its machine learning ambitions, it at least seems to be padding an increasingly cushy lead, which is a nice thing to have if your plans entail perfecting driverless cars or, say, one day conquering the world.

About the author

Chris is a staff writer at Fast Company, where he covers business and tech. He has also written for The Week, TIME, Men's Journal, The Atlantic, and more.