Why Robots Are Learning Our Pain Threshold

Robots don’t know their own strength, so how do you teach them not to hurt humans? Answer: train one to hit people repeatedly. That could be vital information for an early draft of Asimov’s Laws of Robotics — or for the coming Robo-pocalypse.


How do you teach a robot not to hurt humans? Train one to hit someone in an experiment, to find our pain limit. Sounds eminently sensible, doesn’t it? Until you remember your dystopian sci-fi and consider the implications.

The robot experiments are taking place at the lab of Professor Borut Povse in Slovenia. (Yes, he is probably well aware that he sounds like a Bond villain.) He’s been thinking about the future of human-machine interactions, when our daily lives involve working much more closely with robots than we do now. Working comfortably next to a robot — a humanoid like Asimo, say — will require the humans to trust that the machine won’t hurt them accidentally.

Povse spotted a key problem with this scenario: Machines don’t know how much energy in a given impact would cause a person pain. Or to put it in layman’s terms, robots don’t know their own strength. Hence he came up with an experiment to solve the problem. Somewhere in Slovenia there’s a robot punching volunteers at a variety of energies, with blunter or sharper “hammers,” so it can work out where the pain threshold lies.

The plan is to use the data to inform the design of robots that will operate in close proximity to humans, so that they don’t make sudden movements with too much energy.

Interestingly, the iCub experimental robot recently demonstrated how it can watch out for man-machine collisions from the other point of view: when a clumsy person bumps into the robot accidentally. The Italian Institute of Technology has been working on iCub’s six-axis force-torque sensors, giving the robot a kind of wobbly, dynamic ability to safely absorb impacts without toppling (which also reduces the injury felt by the impactee). The sensors also let the ‘bot be led by the hand through new tasks.

All of this sounds self-evidently sensible when you’re talking about consumer-type robots. You wouldn’t want your robo-butler to accidentally hurt you, would you? It’s the first of Isaac Asimov’s famous robot laws: “A robot may not injure a human being.” But there are other implications. If you teach a robot the precise point at which pain sets in, you enable an operator with evil intentions to design torture machines. And while you’re thinking about robot war machines of the future, the same data would let defense companies build robots that know how to hit people where it hurts, without actually injuring or killing them. Arguably that’s better than robot-induced death, but it’s an ethical can of worms that the U.S. government — judging by its recent experience with Predator drones — may not be best equipped to handle.

To keep up with this news, and more like it, follow me, Kit Eaton, on Twitter.
