In 1984, a 34-year-old man was crushed by a robot arm. It didn’t matter that he had 15 years of experience in the die-casting factory that would take his life, or that he had been trained to operate robots. At the wrong moment, he got in the machine’s way. And so his fate was the same as that of the dozens of other Americans who’ve died at the hands of robots in the last few decades–a trend that’s only slated to increase as factory automation rolls out into our streets, with smart cities and driverless cars.
But at MIT’s Computer Science and Artificial Intelligence Laboratory, researchers have been pioneering an intriguing solution. They’re creating robots that we can oversee with our thoughts–and more specifically, robots that recognize when we think they’re doing something wrong.
“As you watch the robot, all you have to do is mentally agree or disagree with what it is doing,” explains CSAIL director Daniela Rus in a press release. “You don’t have to train yourself to think in a certain way–the machine adapts to you, and not the other way around.”
To accomplish this feat, the MIT researchers attached EEG electrodes to a person’s scalp. These electrodes measure brainwaves, and they’re already used by all sorts of software that lets users control things by focusing their minds. But that software generally has a serious learning curve. You must learn to think a certain way to activate these systems–a feeling akin to trying to move just your ring toe.
MIT, on the other hand, focused on measuring one particular type of brainwave: error-related potential (or ErrP) signals. These faint electrical impulses appear when we recognize that something is wrong. Researchers placed subjects in front of a robot arm that was tasked with grabbing one of two objects. The correct object was marked with an LED the robot couldn’t see. And as the robot reached, the subject would automatically register whether its choice was right or wrong.
In just 10 to 30 milliseconds, researchers were able to detect a person’s ErrP about 65% of the time. In other words, they could feasibly stop a robot’s bad decision in its tracks a majority of the time–simply by having a human watch it work.
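The basic supervision loop is simple to picture: record a short window of EEG right after the robot commits to an action, decide whether it looks like an error-related potential, and halt the robot if it does. The study’s actual classifier isn’t described here, so the sketch below stands in a much cruder detector–a hypothetical z-score threshold against a resting baseline–purely to illustrate the control flow; `detect_errp`, `supervise`, and the threshold value are all illustrative assumptions, not the researchers’ method.

```python
import statistics

def detect_errp(window, baseline, threshold=3.0):
    """Hypothetical ErrP detector (NOT the CSAIL classifier): flag an
    error signal when the mean amplitude of a short post-action EEG
    window deviates from a resting baseline by more than `threshold`
    standard deviations."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    z = (statistics.mean(window) - mu) / sigma
    return abs(z) > threshold

def supervise(planned_action, eeg_window, baseline):
    """Let the robot proceed unless the human observer's EEG suggests
    they disagree, in which case halt."""
    if detect_errp(eeg_window, baseline):
        return "halt"
    return planned_action

# Illustrative values only: quiet baseline, then one window with a
# large deflection (disagreement) and one without.
baseline = [0.1, -0.1, 0.05, -0.05, 0.0] * 4
print(supervise("grab_left", [5.0, 4.8, 5.2, 5.1], baseline))   # halt
print(supervise("grab_left", [0.02, -0.03, 0.01, 0.0], baseline))  # grab_left
```

In a real system the decision would come from a trained classifier operating on filtered multi-channel EEG, but the shape of the loop–act, read the observer, veto or proceed–is the same.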
Of course, it’s hard to imagine how this sort of system could scale beyond the industrial context. If more robots are scattered across our lives, would we need to walk around all the time with 50-or-so electrodes attached to our heads? And yet, maybe it’s not so far-fetched to imagine a scenario where you’re sitting in your self-driving Toyota on the way to work as it starts to cut another car off. All you’d have to do is experience that moment of stressful panic–and, in bracing for impact, you might avoid it altogether.
[Photos: Jason Dorfman/MIT CSAIL]