Remember, It’s Only a Movie …

Will scientists and engineers produce artificially intelligent humanoids like Spielberg’s David within the next 100 years? Will robots learn to accept and return human love? How far away is the future? Even the gurus disagree.

For a quarter century, the celluloid fantasies of Steven Spielberg — tyrannosaurus theme parks, retaliatory great whites, and “little squashy” space invaders — have granted generations of moviegoers permission to imagine stranger, more thrilling realities. Though millions of would-be Elliotts (myself included) wandered their backyards with flashlights and fistfuls of Reese’s Pieces hoping to lure alien playmates during the summer of 1982, the American public has likely never found a Spielberg illusion as believable as it did this weekend during the debut of A.I.


In the science-fiction epic, Spielberg channels his fairy-tale tendencies through the tangled, tainted ecosystem of Stanley Kubrick’s 2001: A Space Odyssey and infuses the atmosphere with the scent of plausibility. The tale begins roughly 100 years in the future, when professor Allen Hobby challenges a team of scientists and engineers from Cybertronics Manufacturing to improve upon the company’s core product: humanoid robots built for consumer purposes. The firm already does a thriving business in personal-assistant bots that clean house, care for children, and satisfy various corporeal desires. Hobby wants more. He wants to create an artificially intelligent robot that can “genuinely love with a mind.”

Hence David is created — an artificially intelligent boy who can express emotion to and receive emotion from humans. He is a technological breakthrough who advances robot love from sensuality stimulators — preprogrammed physical reactions designed to convey anger, happiness, and sorrow — to bona fide subconscious emotion. Hobby says that he is able to produce this sentient child by “mapping the impulse pathways of a single neuron,” thereby creating a neural network for love.

And sitting in the theater watching David come to life, I believed Hobby — hook, line, and sinker.

After all, it is 2001. By Kubrick’s timetable, we’re way behind schedule on this robot stuff. By now, some version of the Jetsons’ housekeeper, Rosey, should be cleaning my toilet and packing my lunch while I sleep. A more congenial HAL 9000 should be chauffeuring me through rush hour. And Lieutenant Commander Data should be serving as governor of California. Right?!

Determined scientists have been working to perfect robotics and artificial intelligence for more than 50 years. Four years have passed since Deep Blue dethroned Garry Kasparov, the best (human) chess player of all time. Surely those scientists must be close. Surely Spielberg’s forecast of a thinking, interacting, loving “mecha” (mechanism) can’t be that far away. Even world-famous author and inventor Ray Kurzweil says that machines will be able to understand, receive, and return love within the next 30 years. So what’s the problem? Where’s my mecha masseuse? My cyber shrink?

I called Jordan Pollack, associate professor of computer science at Brandeis University, to ask when I could expect delivery on my stove-scrubbing bot. His reply: Cool your jets.


Pollack, who made headlines late last year with the news that he’d hatched a batch of robots that could reproduce themselves, is one of the leading lights in the artificial-intelligence community. Despite enormous progress in the field, he says that Spielberg and Kurzweil have underestimated the complexity of merging artificial intelligence with robotics. Even if technology continues to advance in leaps and bounds, the prohibitive costs of designing and assembling mechas that look and move like a human will keep them out of the consumer market for another 200 to 300 years. And even then, the result is anyone’s guess.

“The ATM machine was the first consumer robot, and look how we take it for granted today,” Pollack says. “Within a few hundred years, we may have simulacra that become so sophisticated that they move off the screen and into electronic puppet bodies. We will take those for granted as well, interacting with them as if they were human.”

In the meantime, the industrial applications for robots will grow almost as quickly as scientists’ ability to invent cheaper, smarter ways to automate business. Pollack is now studying the principles of evolution and is working to program small robots with algorithms inspired by biological development. These robots are designed to spawn increasingly customized robots inexpensively and without human intervention — but with the ability to pass the Darwinian test of survival.
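The evolutionary loop behind that kind of research can be sketched in a few lines of Python. This is a toy illustration, not Pollack’s actual system: his robots evolve physical bodies and controllers, while this hypothetical version evolves a simple list of design parameters toward an arbitrary target, using the same select-and-mutate cycle.

```python
import random

# Hypothetical target design; fitness rewards designs close to it.
TARGET = [0.5, 0.2, 0.9, 0.1]

def fitness(design):
    # Higher is better: negative squared distance to the target.
    return -sum((d - t) ** 2 for d, t in zip(design, TARGET))

def mutate(design, rate=0.1):
    # Small random Gaussian tweaks play the role of biological mutation.
    return [d + random.gauss(0, rate) for d in design]

def evolve(generations=200, pop_size=20):
    # Start from a random population of candidate designs.
    population = [[random.random() for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]  # selection: keep the fittest half
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)
```

No human tells the program what a good design looks like beyond the fitness test itself; designs that survive the Darwinian filter go on to spawn the next generation.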

Though Pollack’s robots will serve the industrial world, he says that cost-effective production methods will ultimately speed the arrival of humanoids in society. Still, other hurdles must be cleared. Right now, most robots being created at Brandeis, MIT, and other research facilities are not designed for cross-functional purposes. They are hand-coded by humans to perform specific tasks, such as assembling carburetors or churning through potential chess moves. They don’t progress and learn; they just do.

“Just because Deep Blue could beat Kasparov at chess doesn’t mean that Deep Blue could design a circuit board or vacuum your house,” Pollack says. “Most robots are built with vertical software that works for just one industry or task. Artificially intelligent robots must be able to operate on a horizontal plane across many industries and emotions.”

The first program capable of natural-language conversation was introduced by MIT’s Joseph Weizenbaum in 1966. The “Eliza” program served as a Rogerian psychologist — responding to questions and problems according to a script that consisted of patterns and corresponding responses. If a user told Eliza, “I’m upset because my husband left me,” the chatterbot might respond, “Is that the real reason you’re upset?” or “Can you elaborate?” If you try typing a question into Ask Jeeves today, you will see that chatterbot technology has not advanced very far in the past 35 years.
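Eliza’s trick — patterns paired with canned responses, no understanding required — is simple enough to sketch in a few lines. This is a hypothetical miniature, not Weizenbaum’s original script; the rules and function names are invented for illustration:

```python
import random
import re

# Each rule pairs a regular expression with Rogerian response templates.
# Captured fragments of the user's words are echoed back via {0}, {1}.
RULES = [
    (re.compile(r"i'?m (upset|sad|unhappy) because (.+)", re.I),
     ["Is that the real reason you're {0}?",
      "Can you elaborate on {1}?"]),
    (re.compile(r"i feel (.+)", re.I),
     ["Why do you feel {0}?"]),
]

# Stock replies when nothing in the script matches.
DEFAULT = ["Please go on.", "Can you elaborate?"]

def respond(text: str) -> str:
    for pattern, responses in RULES:
        match = pattern.search(text)
        if match:
            return random.choice(responses).format(*match.groups())
    return random.choice(DEFAULT)
```

Feed it “I’m upset because my husband left me” and it reflects the statement back without grasping a word of it — which is exactly why the attachments people formed to Eliza say more about humans than about the machine.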


Neither, it seems, has human behavior. Eliza hardly demonstrated emotion in its conversations, but that didn’t stop humans from forming personal attachments to it. Robot folklore says that Weizenbaum’s secretary grew dependent on Eliza as a personal counselor and became outraged when she found her boss reviewing transcripts of her conversations with the robot.

“The fact is, we can form attachments to interactive, pattern-generating, natural-language simulating computer programs that have no minds,” Pollack says. “Considering how some people feel about their cars and their Tamagotchis, there is no doubt that humans could form emotional bonds with mechas like Steven Spielberg’s David. The question is whether mechas could really return that emotion with anything other than simulation.”

Right now, they cannot. The most advanced robot created at MIT’s Artificial Intelligence Lab, Kismet, is programmed to mimic human behavior — to copy, but not to create. Elsewhere, scientists are forming mathematical models based on neurons that will contribute to a “World Brain” — neural networks capable of human-level invention, discovery, and artistic creativity.

Still others are studying the physical manifestations of emotion and creating graphic interfaces on the Web that look and speak like two-dimensional people. (NativeMinds Inc. and LifeFX are just two.)

But so far, no scientific research suggests that machines can learn to feel emotion the way Spielberg’s David does. David not only demonstrates love through predictable physical actions (hugging, crying), he also feels emotions so strong that he’s motivated to wander the world in search of love. As Hobby says to the sentient child, “Robots didn’t dream or desire until you. You are the first of a kind.” In other words, David wasn’t faking it.

Few in the artificial-intelligence community agree with Kurzweil’s prediction that scientists’ understanding of the human brain will advance so much in the next 30 years that they will be able to replicate human thinking and feeling in a nonhuman entity. But many academics and companies are working on advanced simulation software that essentially aims to deceive humans. By mimicking the outward manifestations of human emotion, humanoid robots may walk and talk among us by 2301, Pollack says, but they will not replace us … yet.


“With better simulations and higher-precision manufacturing processes, we may be able to create a simulation of a human that is so precise that you may not be able to detect a difference between it and a man,” Pollack says. “Just because people can form an emotional bond with a robot doesn’t make it artificially intelligent. Real artificial intelligence is centuries away.”

Anni Layne is the Fast Company senior Web editor. Learn more about Jordan Pollack on the Web.