This Week In Bots: The Robots We Build To Kill For Us

A classic science-fiction plot element may soon invade real life.


Bot Vid: Inflatable-Arm Army Robot

iRobot’s been working with DARPA on what seems like a very unlikely piece of military hardware: a warbot with an inflatable arm. One would think inflatables are incompatible with a battlefield, but the design the robot makers have pursued means the arm can be completely collapsed into the robot’s body when not needed, improving maneuverability. The AIRarm is, the Automaton blog tells us, powered by pumps, actuators, and strings, and is much lighter than the typical PackBot’s jointed arms. That also lightens the burden on soldiers who have to carry the robot into battle. While for now it’s decidedly military, there’s always the hope that tech like this can help robots in post-disaster scenarios.


Bot Vid: Stanford’s Robot Speeder

Stanford has been working on its robot Audi car, Shelley, for several years now, but recently let it rip on Thunderhill Raceway in California. It’s a very different effort from Google’s self-driving cars because it has a very different goal: It navigates by GPS but lacks the Google cars’ sophisticated external sensors. Instead it’s packed with sensors and computers that optimize the performance of the car itself, with the goal of teaching the system to operate at the edge of the car’s physical abilities. Hence the Thunderhill experiments. Stanford’s goal is to develop a robot “safety driver” that can take over–and perfectly control–a car that, under human control, has suffered some kind of near-disaster or control-loss situation.
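“The edge of the car’s physical abilities” has a concrete meaning in vehicle dynamics: the tires can only deliver so much combined braking and cornering force before grip runs out, a limit often pictured as the “friction circle.” As a rough illustration (this is a textbook simplification, not Stanford’s actual control code, and the friction value is an assumed example), a limit check might look like:

```python
import math

def within_friction_circle(ax, ay, mu=0.9, g=9.81):
    """Check whether combined longitudinal (ax) and lateral (ay)
    acceleration, in m/s^2, stays inside the tire friction circle,
    i.e. the total demanded acceleration doesn't exceed mu * g."""
    return math.hypot(ax, ay) <= mu * g

# Cornering at 0.8 g while braking at 0.5 g demands more grip
# than a mu = 0.9 surface can supply.
print(within_friction_circle(0.5 * 9.81, 0.8 * 9.81))  # False
```

A “safety driver” system operating at this limit would aim to keep the car’s demanded forces just inside that circle rather than beyond it.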

Bot Vid: NASA’s Robot Mars Bore

Just as the Curiosity rover takes its first short roll on Mars, NASA has revealed its next robotic mission to the red planet. The InSight mission will cost $425 million and will try to discover the secrets of Mars’ interior by drilling thirty feet into the soil and performing seismic experiments to peer inside the planet. It’s due for launch in 2016.

Bot News

Honda’s Robot-Mower. Honda, famous in robot circles for its amazing android Asimo, just used Asimo as a promo tool for its newest robots: lawnmowers. Like a Roomba for your grass, the robot mows in random patterns and cuts just a little bit at a time because it’s designed to mow a couple of times a week. It even has a vacuum system to draw grass toward its blades. Depending on options, the Miimo robots will cost up to $3,000 when they hit Europe in 2013.
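Honda hasn’t published Miimo’s navigation logic, but random-pattern mowers of this kind commonly use a simple “bounce” strategy: drive straight until a boundary wire is sensed, then turn back into the lawn at a randomized angle so that repeated passes eventually cover the whole area. A hypothetical sketch of that one decision:

```python
import random

def bounce_heading(heading_deg):
    """On sensing the boundary wire, pick a new heading that points
    back into the lawn: roughly reverse course, with random jitter
    so successive passes don't retrace the same line.
    (Illustrative only -- Honda's actual algorithm is unpublished.)"""
    return (heading_deg + 180 + random.uniform(-60, 60)) % 360
```

The randomness is the point: with no map, uniform coverage emerges statistically from many slightly different passes.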

The Robot Hall Of Fame. Carnegie Mellon University created the Robot Hall Of Fame in 2003 and has honored 21 robots, from the sci-fi classic C-3PO to the Mars rover Sojourner. This year the new entries are open to public voting, with website visitors able to choose from three nominees in each of four categories that range from Entertainment to Research. Voting’s open until September 30.

Baby-Driven Robots. Researchers at Ithaca College in New York have made a breakthrough in robotics by designing and building their own mobility robot for disabled babies. Doctor Who fans will spot a similarity to Dalek baddy Davros, but while Davros and earlier mobility robots used joysticks, the new machine detects the baby’s natural leaning motions and moves accordingly. The WeeBot is used by babies as young as five months and is designed to foster learning and growth in children with impaired mobility.


Bot Futures: Robots And Their Right To Open Fire

While Isaac Asimov’s famous (fictional) robot laws are all about protecting human life, it seems that at an ever-faster rate we fragile humans are actually using our robots to wage war in real life or deliver police authority from the sky.

That’s prompted an interesting blog posting this week at the Wall Street Journal. Referencing Asimov’s three laws, the blog asks “should robots have a license to kill?” It’s a key question because we actually are giving some of our robots weapons that are trained on humans, and rarely do a couple of weeks go by without mention of the “drone war” on targets in the hills of Afghanistan or Pakistan. Drones work well in these inaccessible regions and can loiter in the area before being commanded to fire on a target they’ve detected with their sensors. Yet it’s a tactic that doesn’t always result in the right outcome.

The command authority in the weaponized drone system is important, however. A loitering drone may navigate itself and collect and transmit reams of data. But it cannot decide to fire its weapon alone. That decision is made by the traditional military command chain, chock-full of people.

There is a military robot in service that’s super-smart, critically important, and has the authority to both identify threats and automatically decide to open fire to destroy them. There’s one reason it’s empowered to do so: It needs to operate so swiftly that a man-in-the-loop system wouldn’t work. It’s called Phalanx, and it’s been protecting U.S. Navy assets (and other nations’ assets on land and sea) for several decades now. It’s a close-in weapons system that’s the last line of defense for rapidly emerging threats–like a sea-skimming missile–in the very near area around a ship or army base. Using radar and other sensors, it detects a threat, maneuvers its Vulcan Gatling gun onto the target with speed and precision, and destroys it with a hail of 20mm rounds. It operates almost completely automatically when it’s on duty, and it independently assesses if it’s made a “kill.” It, too, has accidentally engaged the wrong target, resulting in damage and deaths.
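The difference between the two command models described above boils down to one line of logic: a drone’s trigger is gated on a human decision, while a Phalanx-style system fires on anything matching a threat profile because a sea-skimming missile leaves no time for approval. A deliberately toy sketch of the contrast (every name, threshold, and field here is illustrative, not any real weapon’s logic):

```python
def drone_engage(target, human_approved):
    """Man-in-the-loop: the robot may track and recommend, but it
    never fires without explicit human authorization."""
    return "fire" if human_approved else "hold"

def ciws_engage(target, max_range_m=2000, min_speed_mps=200):
    """Autonomous close-in defense: fire on any track that matches
    the threat profile -- fast, inbound, and inside the defended
    bubble -- with no human in the decision loop."""
    if (target["range_m"] <= max_range_m
            and target["speed_mps"] >= min_speed_mps
            and target["closing"]):
        return "fire"
    return "hold"
```

The ethical question in the section title lives entirely in which of these two functions a given robot is allowed to run.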

Phalanx is a tightly constrained system that’s used in limited environments against a very real threat. It does, however, demonstrate that an automatic robot that’s enabled to “kill by algorithm” is in action. The logical leap to giving drones the ability to fire upon targets at will isn’t that big.


But should we give drones this power? Would we tolerate headlines about the accidental shooting of a family on the other side of the world by an intelligent drone? Would the national consciousness challenge the wisdom of putting guns on police drones when they’re used to suppress protests or accidentally shoot an innocent person? Would we blame the police for such a mistake, or the company that made the drone, or perhaps even the person who designed its “license to kill” algorithm?

Earlier this month, a U.S. court okayed the use of drones for police purposes inside U.S. airspace. The ACLU alleges that Congress is pressing ahead with domestic drone regulations, despite questions of legality and public interest and the very real threat of hacking. Some elements in Congress are, however, wary of the idea. Thus far the drones are largely passive surveillance systems, but at the incredible rate of development it seems reasonable to assume an armed drone–even one without a license to fire automatically–will be deployed over U.S. soil. And even if the U.S. doesn’t make this move, you can bet that other nations will.

Drone authority decisions will be talked about and made at political, police, and military levels soon enough. Our expertise in drone technology is growing daily, and the weapons themselves are becoming ever more potent. It won’t be on the presidential agenda next year, but in four years’ time the situation may be very different.

[Image: Flickr user anhonorablegerman]

Chat about this news with Kit Eaton on Twitter and Fast Company too.

About the author

I'm covering the science/tech/generally-exciting-and-innovative beat for Fast Company. Follow me on Twitter, or Google+ and you'll hear tons of interesting stuff, I promise. I've also got a PhD, and worked in such roles as professional scientist and theater technician...thankfully avoiding jobs like bodyguard and chicken shed-cleaner (bonus points if you get that reference!)