Facebook says it’s using robots to help advance its work in artificial intelligence. The work is similar to the (mildly creepy) autonomous robot work going on at Boston Dynamics, but different in that Facebook is opening up its research to everyone in the field, a Facebook researcher tells Fast Company. That’s why the company just released several research papers describing its work since last summer, when the robotics lab opened.
Facebook is trying to develop artificial intelligence models that will allow robots–including walking hexapods, articulated arms, and robotic hands fitted with tactile sensors–to learn by themselves, and to keep getting smarter as they encounter more and more tasks and situations.
In the case of the spider-like hexapod (“Daisy”) I saw walking around a patio at Facebook last week, the researchers give a goal to the robot and task the model with figuring out by trial and error how to get there. The goal can be as simple as just moving forward. In order to walk, the spider has to know a lot about its balance, location, and orientation in space. It gathers this information through the sensors on its legs.
“As it gathers information, the model optimizes for rewards and improves its performance over time,” say the researchers in a blog post published today.
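The trial-and-error loop the researchers describe can be sketched in a few lines. This is a toy illustration, not Facebook's actual method: the `step` dynamics, the forward-progress reward, and the random-search policy update are all hypothetical stand-ins for Daisy's real sensors, legs, and learning algorithm.

```python
import random

def step(position, action):
    # Toy dynamics: the action nudges the robot forward, with noise
    # standing in for messy real-world sensing (hypothetical).
    return position + action + random.gauss(0.0, 0.01)

def reward(old_pos, new_pos):
    # The goal "can be as simple as just moving forward":
    # reward any forward progress.
    return new_pos - old_pos

def train(episodes=200, seed=0):
    random.seed(seed)
    best_action, best_reward = 0.0, float("-inf")
    for _ in range(episodes):
        # Trial and error: perturb the current best action and keep
        # whichever variant earns more total reward.
        candidate = best_action + random.gauss(0.0, 0.1)
        pos, total = 0.0, 0.0
        for _ in range(10):
            new_pos = step(pos, candidate)
            total += reward(pos, new_pos)
            pos = new_pos
        if total > best_reward:
            best_action, best_reward = candidate, total
    return best_action, best_reward
```

Over many episodes the kept action drifts toward whatever moves the robot forward fastest, which is the "optimizes for rewards and improves its performance over time" idea in miniature.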
Also, the data coming from the robot’s various sensors in real-world testing can be messy and uncertain, which creates an even bigger challenge for the model. An uneven surface or new wind conditions could knock the thing off course or even cause it to fall over.
This way of training a model (learning by doing) is different from the supervised learning that other types of machine learning models undergo. For instance, a model designed to identify plants might be trained by showing it thousands of correctly labeled images and letting it learn from the similarities between them. That kind of training data is relatively easy to come by; it’s widely available from open-source repositories, for example.
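For contrast, supervised learning boils down to fitting labeled examples. The sketch below uses a one-nearest-neighbor classifier over made-up feature vectors as a stand-in for "thousands of correctly labeled images"; the feature values and plant labels are invented for illustration.

```python
def train_classifier(examples):
    # Supervised learning in its simplest form: memorize
    # (features, label) pairs supplied by a human labeler.
    return list(examples)

def predict(model, features):
    # Classify a new input by its closest labeled example
    # (1-nearest neighbor, squared Euclidean distance).
    def distance(example):
        return sum((a - b) ** 2 for a, b in zip(example[0], features))
    return min(model, key=distance)[1]
```

The key difference from the robot's setup: here the right answers are handed to the model up front, whereas Daisy has to generate its own experience and judge it by reward.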
Like human babies, the robots start out with almost nothing. They lack an understanding of their environment and of how their bodies move within it. That’s one of the reasons Facebook rewards the AI models behind the robots for exploring and learning.
During my visit to the robotics lab in Menlo Park last week, Facebook research scientist Franziska Meier showed me how she sends a command to a robotic arm. After the arm moved, she told me that it had just received and responded to 300 instructions, not just one. She had actually sent instructions specifying which joints should move, how fast, how far, and in what order, among other things.
Part of the job of the model behind the robotic arm is to learn from itself. “We are sending commands to the joints and modeling the relationship between the commands and the movements of the joints,” Meier told me.
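One simple way to model that command-to-movement relationship is to fit it from logged data. The least-squares fit below is a hypothetical, single-joint stand-in for whatever learned model Facebook actually uses; the function names and sample numbers are invented.

```python
def fit_dynamics(commands, movements):
    # Estimate a linear command -> movement relationship for one joint
    # by ordinary least squares (a toy stand-in for a learned model).
    n = len(commands)
    mean_c = sum(commands) / n
    mean_m = sum(movements) / n
    cov = sum((c - mean_c) * (m - mean_m)
              for c, m in zip(commands, movements))
    var = sum((c - mean_c) ** 2 for c in commands)
    slope = cov / var
    intercept = mean_m - slope * mean_c
    return slope, intercept

def predict_movement(model, command):
    # Once fitted, the model predicts how a joint will respond
    # to a command it hasn't seen before.
    slope, intercept = model
    return slope * command + intercept
```

The point of such a model is exactly what Meier describes: after observing enough command/response pairs, the robot can predict the effect of a command before issuing it.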
Developing this keen understanding of cause and effect (much as humans do when very young) will help the robot complete specific tasks later on, such as exploring a particular environment or manipulating objects.
You might wonder why a social networking company is spending resources on robotics research. The answer is that it’s ultimately more about the AI than the robots. “This work will lead to more capable robots, but more important, it will lead to AI that can learn more efficiently and better generalize to new applications,” the researchers said.