With Evolved Brains, Robots Creep Closer To Animal-Like Learning

Get ready for four-legged bots of all shapes and sizes, built for all sorts of uses, that learn how to maneuver through landscapes with the grace of a cheetah.


The most nightmare-inducing characteristic of Big Dog, DARPA’s robotic military mule, might be the way it moves so stiffly, yet unrelentingly, over treacherous battlefield terrain. It turns out the repetitive mechanical gait that calls to mind some coming robopocalypse is also a huge headache for Big Dog’s makers—and for many of the big thinkers behind walking bots envisioned for everyday domestic use.


Units like Big Dog move so awkwardly because of their rudimentary brains, which require pre-programming for every little action. A four-legged walking bot could jump smoothly over rocks or weave through trees with the fluid grace and reflexes of a cheetah—if only it had a better brain. One that was more animal-like. Thanks to breakthroughs in understanding how biological brains evolve, a team of robotics researchers say they’re close.

“We are working on evolving brains that can be downloaded onto a robot, wake up, and begin exploring their environment to figure out how to accomplish the high-level objectives we give them (e.g. avoid getting damaged, find recharging stations, locate survivors, pick up trash, etc.),” says Jeffrey Clune, Assistant Professor of Computer Science at the University of Wyoming, who is part of the robotics team.

The group initially explored the idea by evolving gaits for robots, in an effort to reduce the time it takes to get them operational. Currently, getting any robot to walk or perform other behaviors is extremely time-consuming for engineers. Not only must they manually program every movement, they have to reprogram it all for new robots or for different versions of the same robot. “The manual approach is too expensive and will not scale to produce many different types of robots,” explains Clune. Not to mention that the resulting clunky, intimidating robo-swagger might work on the battlefield, but it’s not practical for, say, your future butler or home-cleaning bot. The challenge is to get all sorts of robots to somehow learn to walk by themselves.

Clune and fellow team members Hod Lipson, Cornell Associate Professor of Mechanical and Aerospace Engineering, and Cornell students Sean Lee and Jason Yosinski began by combining neural networks with evolutionary concepts from developmental biology. Using this new approach, they began growing artificial digital brains that could take a simulated or physical robot body, recognize the type of body (two-legged, four-legged, etc.), and evolve the neural patterns needed to control it. In its first test, the software evolved digital brains with neural patterns that made a four-legged robot walk within a few hours. What’s more, instead of each leg doing its own thing, the walking patterns it came up with were coordinated and natural.
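To make the evolutionary idea concrete, here is a minimal, purely illustrative sketch—not the team’s actual system. A simple hill climber mutates per-leg oscillator parameters and keeps any change that scores at least as well under a toy “coordination” fitness. The genome layout, fitness function, and all numbers are assumptions for demonstration:

```python
import random

LEGS = 4  # a quadruped, as in the team's first test

def random_genome(rng):
    # One (amplitude, frequency, phase) triple per leg, each in [0, 1].
    return [[rng.uniform(0, 1) for _ in range(3)] for _ in range(LEGS)]

def fitness(genome):
    # Toy stand-in for a physics simulation: reward large strides and
    # penalize legs that oscillate at mismatched frequencies, so the
    # best gaits have all four legs working together.
    amplitudes = [leg[0] for leg in genome]
    frequencies = [leg[1] for leg in genome]
    stride = sum(amplitudes) / LEGS
    mismatch = max(frequencies) - min(frequencies)
    return stride - mismatch

def mutate(genome, rng, sigma=0.1):
    # Small Gaussian tweaks, clamped back into [0, 1].
    return [[min(1.0, max(0.0, v + rng.gauss(0, sigma))) for v in leg]
            for leg in genome]

def evolve(genome, rng, generations=300):
    best = genome
    for _ in range(generations):
        child = mutate(best, rng)
        if fitness(child) >= fitness(best):  # keep non-worsening changes
            best = child
    return best

rng = random.Random(0)
start = random_genome(rng)
evolved = evolve(start, rng)
```

The real system evolves neural network controllers rather than a handful of oscillator parameters, but the loop—generate variants, score them, keep the winners—is the same shape.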

“Previously if you gave evolution a quadruped, and said ‘make it walk,’ it could do it, but it didn’t really understand that it had a four-legged body,” says Clune. “With developmental biology, it realizes the nature of its body, grows a brain that sees the four legs, creates similar neural wiring patterns for each leg, and thus produces regular gaits that have all four legs working together.”

The process is not an immediate cakewalk. Initially, the group allowed the evolving digital brains to control the quadruped robot directly. This led to the robot breaking down numerous times, because evolution tried crazy walking patterns. To improve matters, the team let the brains evolve and control a body in simulation for hundreds of generations until they got the walking motion right, and only then transferred control to the actual robot.
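That workflow—evaluate candidates in simulation for many generations, and only hand the champion to hardware at the end—can be sketched as follows. The simulator, the robot interface, and every number here are hypothetical stand-ins, not the team’s code:

```python
import random

def simulate(gait):
    # Stand-in for a physics simulator: scores a gait with no risk to
    # hardware. The (arbitrary) ideal gait here is all values at 0.5,
    # so scores range from -2.0 (worst) up to 0.0 (perfect).
    return -sum((g - 0.5) ** 2 for g in gait)

hardware_runs = []

def run_on_robot(gait):
    # Stand-in for the physical robot. In this workflow it is invoked
    # exactly once, with a gait already vetted across many simulated
    # generations -- crazy candidates never reach the hardware.
    hardware_runs.append(list(gait))
    return simulate(gait)  # pretend simulation matches reality

def evolve_in_sim(rng, pop_size=20, generations=100, genes=8):
    pop = [[rng.random() for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=simulate, reverse=True)
        parents = pop[: pop_size // 2]  # keep the best half (elitism)
        children = [
            [min(1.0, max(0.0, g + rng.gauss(0, 0.05)))
             for g in rng.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children
    return max(pop, key=simulate)

rng = random.Random(1)
champion = evolve_in_sim(rng)
score = run_on_robot(champion)
```

The design choice is the point: all the breakage-prone trial and error happens in software, and the robot only ever executes a gait that already works in simulation.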


“It’s a bit of a misnomer to think of this as being the easy way of doing things,” says Bill Smart, Associate Professor of Computer Science and Engineering at Washington University in St. Louis. “Instead of having to write tricky code to control the gait, you have to write equally tricky code to learn a good gait.”

Clune agrees that is true for the first robot. But once you have the tricky learning code, he says, you can reuse it for as many robots as you like.

Right now the brains they can evolve are very small, with perhaps hundreds of neural connections, focused only on locomotion. In nature, that corresponds to the brains of simple worms. However, a recent breakthrough in understanding why biological brains are organized into modules could prove to be a game changer in the field of artificial intelligence, allowing computer brains to scale to millions or billions of connections. For that discovery, Clune and Lipson teamed up with Jean-Baptiste Mouret, a Robotics and Computer Science Professor at Université Pierre et Marie Curie, Paris.

“Imagine how hard it would be to build and modify a non-modular car,” states Clune. “If the machinery of the locks were entangled with the functionality of everything else, when you improve how the locks work, you might break the transmission, muffler, and steering wheel. That sort of non-modular design is too entangled to work with. For decades in this field all we could evolve for the most part were entangled brains where everything depended on everything else.”

“We’ve had various attempts to try to crack the modularity question in lots of different ways,” says Lipson in a statement. “This one by far is the simplest and most elegant.” The discovery was outlined this week in a paper in the Proceedings of the Royal Society.
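The pressure the paper identifies is deceptively simple: alongside selecting for performance, evolution is also made to pay a small price for every neural connection, and sparse, modular wiring emerges. Here is a toy illustration of that pressure—the network, the task, and all numbers are invented for demonstration, not taken from the paper:

```python
import random

N_EDGES = 30               # possible connections in a toy network
REQUIRED = {0, 7, 14, 21}  # connections the task actually needs (invented)

def performance(edges):
    # Fraction of task-critical connections that are present.
    return sum(1 for e in REQUIRED if edges[e]) / len(REQUIRED)

def evolve(rng, connection_cost=0.0, steps=3000):
    # Hill-climb over a boolean wiring diagram, toggling one random
    # connection per step and keeping non-worsening changes.
    edges = [rng.random() < 0.5 for _ in range(N_EDGES)]

    def score(es):
        return performance(es) - connection_cost * sum(es)

    for _ in range(steps):
        child = list(edges)
        flip = rng.randrange(N_EDGES)
        child[flip] = not child[flip]
        if score(child) >= score(edges):
            edges = child
    return edges

rng = random.Random(2)
free = evolve(rng)                          # performance-only selection
costed = evolve(rng, connection_cost=0.01)  # performance + connection cost

# Without a connection cost, useless wiring drifts in and out freely and
# the evolved network stays entangled. With the cost, every superfluous
# connection is pruned, leaving only the task's needed wiring -- and in
# the full version, with multiple sub-tasks, that pruning is what
# produces separate modules.
```

Both runs solve the toy task perfectly; the difference is that only the cost-bearing run ends up with a clean, minimal wiring diagram.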

The breakthrough in understanding modularity will allow them and others to start evolving digital brains that are, for the first time, structurally organized like biological brains, composed of many neural modules. According to Mouret the merging of modularity with AI “makes a powerful tool to give robots the adaptation abilities of animal species.” If a robot breaks a leg or loses a part, it’ll learn to compensate and still operate effectively, just like animals do.


Grand dreams aside, what it means at present for the team is evolving brains that can go beyond figuring out simple things like gaits to more intelligent behaviors like learning. They’ve 3-D printed an advanced quadruped robot called Aracna to further examine evolved gaits. The next step is to evolve larger, more modular brains that will hopefully approach natural brains in complexity, opening up the possibility of creating an entirely new breed of robots.

“Evolutionary computation has already produced many things that are better than anything a human engineer has come up with, but its designs still pale in comparison to those found in nature,” states Clune. “As we begin to learn more about how nature produces its exquisite designs, the sky’s the limit: There’s no reason we cannot evolve robots as smart and capable as jaguars, hawks, and human beings.”

[Images: Cornell]