Like human intelligence, artificial intelligence exists on a spectrum. At some still-distant endpoint is the self-interested sentience of science fiction. Between that and a simple gadget like a coffee maker lies a huge breadth of complexity: making sense of imperfect instructions, learning from mistakes, changing behavior in response to events without being explicitly told to, or, the mother lode, working cooperatively with other machines outside a strict hierarchy.
MIT’s Computer Science and Artificial Intelligence Laboratory is tasked with solving problems like these, and its researchers are close to creating a level of robot intelligence that would let groups of flying robots work together to obey our commands. We are entering the age of the autonomous rescue helicopter.
“The applications could include newsgathering [of] video footage of hard-to-explore areas: the top of mountains, the top of volcanoes, fire situations,” explains Eric Feron, an AI researcher at Georgia Tech (formerly of MIT), in the video below. “Anywhere that would require very agile maneuvers and very agile machines to operate.”
MIT postdoc Frans Oliehoek has been using teams of these helicopters to develop a general architecture for solving this problem of distributed intelligence. “What you want to do is try and decompose the whole big problem into a set of smaller problems that are connected,” Oliehoek says. “We now have some methods that seem to work quite well in practice.”
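Why does decomposing the big problem matter? A back-of-the-envelope sketch makes the point (the function names and numbers below are hypothetical illustrations, not from Oliehoek's actual methods): a single joint policy over every robot's observation history grows exponentially with team size, while per-robot policies that only track a few neighbors stay manageable.

```python
def joint_table_size(n_agents, local_histories):
    """Centralized approach: one policy table indexed by the joint
    observation history of the whole team."""
    return local_histories ** n_agents

def factored_table_size(n_agents, local_histories, neighbors):
    """Decomposed approach: each robot keeps a policy over its own
    history plus the histories of the few neighbors it interacts with."""
    return n_agents * local_histories ** (1 + neighbors)

# Six robots, 100 possible local histories each, one neighbor apiece:
print(joint_table_size(6, 100))        # 1,000,000,000,000 joint entries
print(factored_table_size(6, 100, 1))  # 60,000 local entries
```

The connected smaller problems aren't independent, so the local policies still have to be reconciled with one another; the savings come from never having to enumerate the full joint space.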
MIT news officer Larry Hardesty summarizes some of the pressing issues in coordinating teams of rescue robots:

It may be, for instance, that a robot helicopter trying to find a way into a burning building is much less likely to get itself incinerated if it makes two reconnaissance loops around the building before picking an entry point than if it makes just one. So its policy isn’t as simple as, “If you’ve just completed a loop, fly through the window farthest from the flames.” Sometimes it’s, “If you’ve just completed a loop, make another loop.” Moreover, if a squadron of helicopters is performing a collective task, the policy for any one of them has to account for all the possible histories of all the others.
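Hardesty's example fits in a few lines of toy Python (the observation names and the policy itself are hypothetical, purely illustrative): a history-dependent policy can return different actions for the same latest observation, depending on everything that came before it.

```python
def history_policy(history):
    """Choose an action from the full sequence of observations so far.

    `history` is a tuple of observation strings, oldest first --
    the policy reads the whole history, not just the last entry.
    """
    loops_completed = history.count("loop_done")
    if loops_completed == 0:
        return "fly_loop"  # first reconnaissance pass
    if loops_completed == 1:
        return "fly_loop"  # a second pass lowers the risk of incineration
    return "enter_window_far_from_flames"

# The same latest observation ("loop_done") yields different actions
# depending on the history that preceded it:
print(history_policy(("loop_done",)))              # fly_loop
print(history_policy(("loop_done", "loop_done")))  # enter_window_far_from_flames
```

A whole squadron compounds the problem: each robot's policy would have to branch not just on its own history but on every history its teammates might have seen, which is exactly the blow-up that motivates the decomposition work above.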
“There is [also] an aesthetic component to our research,” Feron adds, “which is to make such machines as graceful as possible. What we would want is for this machine to fly as well [as], if not better than, a bird.”
Here’s the genesis of the helicopters: in this video from 2008, AI researcher Feron and members of his team describe how an off-the-shelf helicopter uses modified acceleration sensors, video cameras, and an internal gyroscope, both to relay information and to respond in real time with autonomous, aggressive maneuvering.
Last year, aeronautical engineer Garrett Hemann demonstrated how a helicopter’s AI interprets a complex procedure defined in natural language:
Then, last month, Oliehoek presented some of this research on swarming copters at the 10th annual International Conference on Autonomous Agents and Multiagent Systems (AAMAS) in Taiwan, a high-profile venue that gathers researchers from all over the world who are trying to tackle these problems, from visual object learning to generalizable mathematical models for decision-making.
Huge interconnected databases coupled with lightning-fast, military-derived vehicles might set off alarm bells for anyone who’s seen the Terminator movies more than once. But this isn’t Skynet. It’s better: machines using powerful, coordinated tools to solve problems that we define. After all, movies like Aliens, where human beings turn on each other in pursuit of their own narrow self-interest, always seemed like a more plausible dystopian future.
[Image (of just a regular toy helicopter that can’t respond to commands to do a front flip): Flickr user gracewong]