Robots have a tough life.
A few years ago, a hitchhiking bot designed to test whether people would be kind to robots was found damaged beyond repair in an alley in Philadelphia. Japanese researchers have observed children in a shopping mall punching and kicking a robot that was supposed to help shoppers. One security robot deployed in San Francisco in 2017 was “battered with barbecue sauce, allegedly smeared with feces, covered by a tarp, and nearly toppled by an attacker,” according to the Washington Post. Boston Dynamics’ tongue-in-cheek “robot abuse” videos have millions of views on YouTube. For many people, there’s something tragically comic about seeing a robot (the manifestation of human fears over automation, and the symbol of a sci-fi techpocalypse) meet its demise, even when it’s by suicide. A defenseless robot seems to provide too great a temptation to “test” with harassment; after all, it can’t fight back.
The long list of instances of humans hurting robots raises serious questions about ethics and technology. How should we treat robots? At what point does a robot deserve to be treated like a sentient being? And if we don’t treat them as such, what does that say about us? While it might be entertaining to think about people vandalizing robots operating in the public sphere, the phenomenon means there’s work to be done in determining acceptable ways to treat these machines, or these new types of beings, depending on how you look at it.
With these questions in mind, a group of undergraduate researchers at Naver Labs, based in Seoul, South Korea, set out to test whether they could design a robot that would discourage this kind of abuse, specifically among children. Rather than relying on security guards or legal protections to stop people from abusing bots, they wondered if a robot itself could teach people how to act through savvy interaction design.
The result, a tortoise-shaped bot called “Shelly,” has a built-in defense mechanism, one that real-life tortoises use naturally. When children pet it nicely, the robot’s 3D-printed shell lights up with LEDs and it moves its legs to indicate that it’s happy. But if a child starts to abuse it by hitting or kicking the poor turtle-bot, it does what any living tortoise would do: It curls up into its shell and stops interacting with the child altogether.
“At first, we tried to give some feedback that can show that the robot is angry when it gets abused, but we found that those feedbacks can actually foster abuse because children want to see the robot’s reaction,” Jason J. Choi, one of the researchers, tells Co.Design via email. “We concluded that stopping all the interaction for a certain period of time is effective for preventing the robot abuse as children want the robot to keep interacting with them.”
The students (Choi, along with Hyunjin Ku, Soomin Lee, Sunho Jang, and Wonkyung Do) ran tests to see how children would respond to different robot behaviors. They saw a reduction in abusive behavior with this kind of interaction design, with the greatest change occurring when the robot would hide and stop responding for 14 seconds before reemerging; the frequency at which children abused Shelly was almost cut in half. How to apply this insight to robots that will be around adults, who might have more complex reasons for abusing them, remains an open question. But the researchers hope to continue studying how children interact with robots by developing a tracking system that can analyze children’s behavior and predict individual levels of abusiveness.
Shelly’s efficacy among kids illuminates how this human tendency will become a concern for interaction designers as robots begin to infiltrate our public lives. Perhaps thoughtful interaction design, like Shelly’s, could prevent children from using a mall robot as their personal punching bag, or convince people to treat a hitchhiking robot with kindness, as you might any other member of society.