This Week In Bots: The Pentagon Doesn’t Trust Robots, But Your Grandma Does

Wherein old schoolers school us on the good that bots can do.


Bot Vid: Big Dog Quiets Its Engine Bark

Boston Dynamics’ Big Dog/LS3 project is already one of the most interesting pieces of military robotics out there, and the company just revealed more info on its latest hardware iteration. The robodogs are now more agile and can step over awkward barriers, even while walking backwards; they can roll over if they fall; and they can move at anything from a slow 1 mph crawl to a 5 mph jog, with an ultimate target top speed of about 7 mph. And, as a bonus, the engine roar of the earlier prototypes is much reduced.


Bot Vid: The Gentlest Robo-Tentacle

Soft robots are almost the opposite sort of device from Big Dog, and Harvard University has been leading the charge. It’s just demonstrated a new soft robotic tentacle so agile and yet so gentle that it can wrap around a flower and pick it up without crushing it. The pneumatic robot uses air-powered “muscles” that can curl in many dimensions, and New Scientist notes the team has experimented with adding a camera, syringe, or suction gripper at the end to make it more useful. Robots like this may end up having uses in medicine, in the home, or possibly in search-and-rescue situations.

Bot Vid: Canada Goes To Mars

Even as Curiosity roams the sands of Mars, other efforts to explore the red planet are being researched here on Earth, and one that points toward future low-cost missions is the work of the Autonomous Space Robotics Lab at the University of Toronto. The small wheeled rover is years from being space-bound, or even hardened against Mars’s environment, but the Automaton blog reports it’s a pretty sophisticated methane-hunter, equipped with a 3-D camera, LIDAR, and a gas-powered generator.


Bot News

Chatty Robots. Work at the University of Aberdeen in Scotland is attempting to bridge the communication gap between robots and the humans who’ll be working around them, or even ordering them to perform tasks. One goal is to let robots explain why they chose a particular action, which is a communication channel that will become critical as more intelligent machines hit the factory floor and the home. The system uses sophisticated natural language generation techniques–which is almost the opposite of how systems like Siri understand natural language commands.

Japan’s Robot Student. Fujitsu is embarking on an effort to boost robot artificial intelligence, and has revealed plans to develop a robot capable of getting a “high” grade in Japan’s national university entrance exam inside four years. By 2021 it wants the bot to be able to pass the much more demanding entrance exam for Japan’s prestigious Todai, the University of Tokyo. The bot will have to be savvy in diverse topics like world history, physics, math, and foreign languages.

iPhone 5 To Boost Robots? Now that Apple’s revealed its new iPhone 5, robot maker Quantum International Corp is suggesting that Apple’s new mobile device may be a serious boon for robotics. Quantum says the phone’s “unparalleled” mix of sensors and processing power makes it ideal for “powerful” robotic brains. Citing innovations like the Sphero toy and the Parrot AR drone, the company is seeking out potential partners for acquisitions and joint ventures. If you thought smartphones were just about making calls or playing Angry Birds, then it looks like you were wrong.


Bot Futures: The Pentagon Doesn’t Trust Robots, But Your Grandma Might

We’ve written on the complex social/psychological matter of trusting robots before, and it’s going to be critical sooner than you think–even before robots have better artificial intelligence–when robots are helping us at home and in hospitals.

But this week it emerged that a Pentagon study expresses concern about levels of trust in robots. Essentially it found that many military operators were wary of the robots they had to work with, doubting the machines would work as they were supposed to. One suggested reason is that many in-service robots were rushed into theater almost as soon as they were built, with too little support infrastructure or training and only a loosely defined use case.

As a solution, the Pentagon suggests rethinking the word “autonomy”: instead of taking it as a sign that a robot acts automatically, making decisions from data all by itself, the word should be understood as describing a partnership. All robots are supervised at some level by humans, and their software was written by humans, with built-in limits.

The Pentagon is tackling an intellectual divide and an education matter. The people who conceive, then design and build robots tend to be highly educated engineers and scientists, savvy to the complex science behind each part of the machines they’re making. The military end users, and even officer-level decision makers, while highly trained, aren’t necessarily aware of the same matters. Tackling this divide is only going to get more critical for the Pentagon as more and more robots come on stream, with the “Big Dog” LS3 unit as a great example. It’ll become still more important when we let robots decide when to fire weapons, by themselves.

Meanwhile, a different body of research from the Georgia Institute of Technology has found that while you might expect young, tech-savvy members of the general public to be the most pro-robot, the older generation is more accepting of robots in their lives than you may have thought.


In a paper on “Older Adults’ Preference for and Acceptance of Robot Assistance for Everyday Living Tasks,” scientists discovered that elderly folks were very keen to let robots take up the load of housekeeping and laundry, and even to act as prompters when it’s time to take medication. But robots are out, in favor of humans, when it comes to more intimate tasks like bathing, getting dressed, and eating. Perhaps it’s the missing compassion that’s at play here, as right now robots are dispassionate chunks of plastic and metal that neither understand emotions nor exhibit them.

But, strangely, even our somewhat limited robot tech right now doesn’t mean that robots aren’t useful when it comes to understanding human concepts of trust. MIT and Northeastern University have collaborated to use a Nexi android to help work out if a human is trustworthy or not.

The experiment had 86 students engage in face-to-face conversations or speak over a web chat. The face-to-face chats were recorded and then analyzed to see how much fidgety movement went on. Each participant pair then played a game with real cash at stake. The research showed that players were less generous when they didn’t trust the other player, which probably tallies with your own experience playing poker. And, demonstrating how good humans are at reading each other’s faces, the players who spoke face-to-face were better at detecting dishonesty and thus were less generous when playing.

Then the players engaged with Nexi while scientists controlled it from behind a screen, although the players thought they were dealing with an autonomous machine. Nexi was commanded to make human-like moves such as touching its face or hands, or to lean back or protectively cross its arms while playing. Those last two are the same nonverbal, often unconscious, cues that humans give when they can’t be trusted, and the human players didn’t trust the robot as a result.

What that tells us is that robots are useful for investigating human emotional interactions, and also that if we design our robots properly–particularly with more human-like shapes and parts–then we can actually boost how much humans trust the droids.

[Image: Flickr user Luís Eduardo Catenacci]


Chat about this news with Kit Eaton on Twitter and Fast Company too.

About the author

I'm covering the science/tech/generally-exciting-and-innovative beat for Fast Company. Follow me on Twitter or Google+ and you'll hear tons of interesting stuff, I promise.