A can opener. A piece of fabric. A toy shark that looks straight out of a Fort Lauderdale gift shop circa 1978. A bundle of wire. An old sneaker that must smell. For a human, picking up any of these mundane objects relies upon a ballet danced by two arms and 10 fingers, choreographed by a brain developed over millions of years of evolution.
So a new robotic platform called Dex-Net 2.0, designed and developed by researchers at the University of California, Berkeley, doesn’t claim to be quite as good as humans at picking up objects. But the researchers do believe it could revolutionize the way objects are packed and shipped at places like Amazon, or even bring an automated Marie Kondo-bot into your life. And it may be shocking to learn that its hand is a simple pincer coated in a special type of silicone.
In our everyday lives, the pincer itself is a surprisingly effective design. Whether it’s the tweezers we use to pluck unwanted hairs or chopsticks that gracefully grasp grains of rice and slippery noodles, pincers simply work. Some amputees even prefer the somewhat dated, squeezing, hooked prostheses over more robust artificial hands.
The scientists at UC Berkeley wanted to make a standard pincer for a robotic arm that would work even better–by changing what material it gripped with.
Generally, a robot with a pincer hand can pick up difficult-to-grasp objects just over a quarter of the time. Wrap the pincer in tape, and the success rate skyrockets to almost two-thirds of the time. Gecko-inspired grips do even better, working 80% of the time. But the new silicone tip developed by UC Berkeley researchers works more than 93% of the time.
In other words, the same complicated robot-arm hardware, running the same depth-sensing software, works roughly three times better with a $0.02 piece of silicone attached to it. (Imagine what it could do for chopsticks!)
Of course, it’s not just any silicone, it’s very well-designed silicone. To develop it, researchers qualitatively assessed several approaches to these silicone “fingers”: making the shape convex or concave, adding spider-web-style spokes, carving out fingerprint-like ridges as on human hands, and simply gridding the surface in nubs. They put 67 shapes through 37 design iterations and 1,377 trials, and eventually found that the grid worked best–even better than gecko material (though they note that the objects used to test the designs were rough, which puts gecko-inspired material at a disadvantage).
These gridded silicone tips are remarkable, but keep in mind that the 93% figure pertains to the hand grabbing difficult-to-grasp–but not unique–objects. Only eight shapes were tested in total. Which is why the fingers are only part of the design that makes the new Dex-Net so good at its job. The other part is the brain, which is capable of strategizing how to grab objects it’s never seen before.
“I’ve been studying robot grasping for 30 years, and I’m convinced that the key to reliable robot grasping is the perception and control software, not the hardware,” says Ken Goldberg, a professor at UC Berkeley who led the study. From chopsticks to surgical clamps, the pincers that we wield in our daily lives are only as effective as the logic that controls them.
The software was built with a technique that will surprise no one: machine learning. A data set of 3D object shapes was combined with a physics model of grasping and other data to generate 6.7 million training examples for the robot’s logic. The arm itself then sees an object through a stock Kinect depth sensor and analyzes its shape. And even when tackling a new object, it will successfully grasp it 99% of the time–all with off-the-shelf hardware, save for that little strip of gridded silicone.
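The broad shape of that approach–propose many candidate pincer grasps from a depth image, score each one with a learned quality model, and execute the best–can be sketched in a few lines. This is a toy illustration, not Dex-Net’s actual code: the candidate enumeration and the quality function below are simplified stand-ins for the real sampler and trained network.

```python
# Toy sketch of a grasp-planning loop: enumerate candidate pincer grasps
# over a depth image, score each with a quality function (a trained neural
# network in Dex-Net; a simple height heuristic here), and pick the best.

def sample_grasp_candidates(depth_image, angles=(0.0, 1.57)):
    """Enumerate candidate grasps as (x, y, gripper_angle) tuples."""
    h, w = len(depth_image), len(depth_image[0])
    return [(x, y, a) for y in range(h) for x in range(w) for a in angles]

def grasp_quality(depth_image, grasp):
    """Stand-in for the learned quality model: favor taller points, where
    this toy 'depth image' stores object height above the table."""
    x, y, _angle = grasp
    return depth_image[y][x]

def plan_grasp(depth_image):
    """Return the candidate grasp the quality model scores highest."""
    candidates = sample_grasp_candidates(depth_image)
    return max(candidates, key=lambda g: grasp_quality(depth_image, g))

# 4x4 toy scene: a single raised point at column 2, row 1.
scene = [[0.0] * 4 for _ in range(4)]
scene[1][2] = 1.0
best_x, best_y, best_angle = plan_grasp(scene)
# The planner picks the raised point: best_x == 2, best_y == 1
```

The real system replaces the heuristic with a convolutional network trained on those 6.7 million synthetic examples, which is what lets it generalize to objects it has never seen.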
If that success rate seems unbelievably high to you, know that the results were an eye-opener to the team, too, as their robot handled old shoes and thin fabric without a second thought. “We were surprised by how well it performs with novel objects it wasn’t trained on,” says Goldberg. “It seems to work because it’s trained on a critical mass of examples of robust grasps, similar to recent results in computer vision and speech recognition.”
Indeed, because of similar machine learning methods, computers can now recognize objects better than humans can. And speech recognition is actually usable with virtual assistants like Siri and Google Now. Does that mean Dex-Net could best humans anytime soon?
“Humans are amazing: any busboy can pick up everything on a table for four and carry it in two hands,” says Goldberg. “Robots are still very far from human capabilities.”