There's More To Google's Artificial Brain Than Finding Cats On The Internet


Google's got a brain. An actual electronic brain.

The New York Times reports that inside Google's high-tech R&D "X" laboratory, the search giant has been building a simulation of the human brain. Rather than programming it with explicit rules, Google's staff have been exposing it to information from the Net so that it learns organically, a little like the way we humans do. The system is built by hooking together 16,000 processor cores with over one billion interconnections, a notional, vastly scaled-down model of the roughly 86 billion neurons in a typical adult human brain.
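
To give a rough sense of what "interconnections" means here, the sketch below treats one layer of artificial neurons as nothing more than a matrix of connection weights. It is an illustration of the arithmetic only, not Google's actual (far more elaborate, multi-layer) architecture, and the layer sizes are made up; the point is that connection counts are products of layer sizes, which is how totals climb into the billions once layers are stacked.

```python
# Illustrative sketch only: a plain fully connected layer, not Google's design.
import numpy as np

def make_layer(n_inputs, n_outputs, rng):
    """Every input neuron connects to every output neuron via one weight."""
    return rng.normal(scale=0.01, size=(n_inputs, n_outputs))

rng = np.random.default_rng(0)
layer = make_layer(32 * 32, 256, rng)      # a tiny 32x32 grayscale input layer
print("connections in this small layer:", layer.size)   # 262,144 weights

# Scaling the same arithmetic up: a 200x200 RGB thumbnail feeding 10,000 units
# would already need 200*200*3 * 10_000 = 1.2 billion connections.
print("connections at thumbnail scale:", 200 * 200 * 3 * 10_000)
```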

Some AI systems are all about code running on very fast computers, simulating the various layers of thought and decision that make up a mind with statistics or logic. Google's approach sits closer to the natural model: the inspiration isn't an abstract algorithm for simulating a brain, but a rough replica of one, exposed to raw information. And Google has lots of raw information at its disposal.

In Google's case there was no elaborate training regime. The team simply exposed the brain to around 10 million digital pictures, pulled at random as thumbnails from YouTube videos, and let it do its own thing, dialing the signals from some neurons up and down and strengthening or weakening the connections between them. It's a concept well known to science fiction; Douglas Adams even used it in The Hitchhiker's Guide to the Galaxy to describe the Deep Thought supercomputer, "which was so amazingly intelligent that even before the data banks had been connected up it had started from I think therefore I am and got as far as the existence of rice pudding and income tax before anyone managed to turn it off."
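
The sketch below shows the flavor of that kind of unlabeled, self-directed learning: a tiny single-layer autoencoder that adjusts its connection weights purely to reconstruct its inputs, with no one telling it what the pictures contain. It is a toy stand-in under stated assumptions (random noise instead of real thumbnails, one layer instead of many, ordinary gradient descent), not the actual system Google built.

```python
# Toy sketch of unsupervised feature learning: a tiny tied-weight autoencoder.
import numpy as np

rng = np.random.default_rng(0)
patches = rng.random((1000, 64))     # pretend: 1,000 unlabeled 8x8 grayscale patches
n_hidden = 16                        # 16 feature-detecting "neurons"
W = rng.normal(scale=0.1, size=(64, n_hidden))
lr = 0.01

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for epoch in range(20):
    h = sigmoid(patches @ W)         # encode: neuron activations for each patch
    recon = h @ W.T                  # decode: try to rebuild the patch from activations
    err = recon - patches
    # Strengthen or weaken connections to shrink the reconstruction error.
    grad = patches.T @ (err @ W * h * (1 - h)) + err.T @ h
    W -= lr * grad / len(patches)

# After training, each column of W is a learned "feature". With real image data,
# such features tend to become edge or blob detectors; stacked into deeper
# networks, higher layers can come to respond to whole objects.
print("learned feature matrix shape:", W.shape)
```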

Google's brain, more or less undirected and through sheer repetition, developed a "concept" of human faces and of the different parts of the human body from these images, and also a concept of cats. "Concept" here means a fuzzy, ill-understood pattern that the system could use to categorize a new image it had never seen before, based on its previous learning. The cat concept was a surprise to the researchers, but given that YouTube is a skewed data set, and that we humans do love our Lolcats, perhaps it was inevitable.
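
In code, such a "concept" amounts to little more than a single neuron's weight vector, whose activation on a brand-new image measures how strongly that image matches the pattern the neuron learned. The sketch below is a hedged illustration: the vectors are made up, whereas in Google's system they emerged from the YouTube thumbnails.

```python
# Illustrative only: scoring a new image against a learned "concept" vector.
import numpy as np

def match_score(concept_weights, image_vector):
    """Dot product plus squashing: closer to 1.0 means a stronger match."""
    return 1.0 / (1.0 + np.exp(-concept_weights @ image_vector))

rng = np.random.default_rng(1)
cat_concept = rng.normal(size=4096)               # stand-in for a learned weight vector
matching_image = 0.01 * cat_concept               # an image that lines up with the pattern
unrelated_image = 0.01 * rng.normal(size=4096)    # an image that doesn't

print("match:    ", match_score(cat_concept, matching_image))
print("unrelated:", match_score(cat_concept, unrelated_image))
```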

So what Google has done is develop a very simplified digital simulation of the human visual cortex. Given that this kind of power is usually imagined as belonging to some military research facility, why is Google trying it?

The answer is manifold. In some sense, it's a natural progression from much of the semantic web research Google has been doing—investigating how best to process and interpret natural, human language input so that it can deliver even more relevant web search results. Google's Knowledge Graph is the most recent example of how powerful semantic search can be. The idea is that if you can better understand what someone actually means when they type a query into Google, then you've got a better chance of delivering a well-matched set of answers in the search results.

A more complete artificial intelligence is simply the successor to these systems, because it could make a guess at the meaning of a search term like "how many roads must a man walk down?" that goes far beyond matching the words to the famous song, perhaps guessing that the mileage of paved roads in the U.S. may be useful data, or even engaging in a little discussion about the meaning of life or the stupidity of Homer Simpson. This is a frivolous example, but think about how you sometimes have to trawl through hundreds of Google search results to find the one you want because the thing you're after isn't straightforward to search for. An AI-powered search could well be swifter and more helpful.

But an AI trained like this would also make for an improved image recognition system, and a much more astute voice recognition system. That could turbocharge the usefulness of search by text or imagery on your Android phone. And given what we know of Project Glass, Google's effort to get us all wearing augmented reality goggles, a future Glass system hooked up to an AI that recognizes what the wearer sees and says would seem an almost inevitable goal. Smarter AI could also help Google's self-driving cars project, perhaps resulting in safer drives or more efficient journeys.

Ultimately you have to wonder if Google's system could plug into its Siri-like service, rumored to be codenamed Majel, to create a genuinely smart digital personal assistant. Fun though that is, we can also guess that Google would most likely use such an AI for its own ends, working out which targeted ads to show each of its users.

[Image: Flickr user Saad Faruque, Google via New York Times]

Chat about this news with Kit Eaton on Twitter and Fast Company too.


4 Comments

  • Neo

    Seriously!? Google is doing this just to search a giant collection of text? I don't think so.

  • Tromprenard

    I disagree that abstract mathematics is useless in designing a cognizant machine. It's the cloud itself that is abstract. A new personality will emerge, but the key WAS a common starting point. This is much like finding a way to respond to a possible intelligent signal being detected by SETI.
    Some of us believe that it has always been there, ready to be discovered, and some call it intelligent design.

    I just call it PEETEE

  • Javasd

    So Google is attempting to create the Johnny Mnemonic version of the Net. NRS anyone?