Rather than using computers to help children learn, one group of researchers at the University of California, Berkeley, is far more interested in using children to help computers learn. It's not the child-labor proposition that it sounds like. In cognitive development labs at the university, psychologists are using puppets, flashing toys, lollipops, and a variety of other tools to determine how young children, some not even talking yet, make calculations in their heads that help them understand the world around them. By studying how the kids' fast-growing brains process information, the psychologists and their computer-scientist colleagues hope to create computers that think and react in more human-like ways.
While people constantly assess one another's mental states and use them to inform how they interact, computers aren't yet able to evaluate a user's mood. But imagine if your computer could interpret facial expressions and tone of voice to read your frustration level, or put two and two together to understand that you work more slowly before you've had your morning coffee. It would be a huge leap forward for artificial intelligence.
“We’re trying to understand what makes human beings such good learners. We learn language, causal relationships, and new concepts from small amounts of data,” says Tom Griffiths, director of the university’s computational cognitive science lab. “And children are particularly interesting because they’re doing the largest amounts of learning. In just a few years, a child is going to speak a language, understand causal relationships in the world around him and learn concepts, like TV and computers, that haven’t appeared anywhere in our evolutionary history.”
The cognitive psychologists are testing infants, toddlers, and preschoolers to better understand how they figure out the world around them. One of the psychologists had toddlers watch as she tasted different foods and made faces, then showed that the children were capable of empathy and could pick up on her preferences. Another showed that even babies who can't yet speak seem to be capable of weighing probabilities. When the researcher showed them two jars of candy, with different proportions of black and pink lollipops in each, then removed one lollipop from each jar without showing them the color, the infants almost always crawled toward the hidden pop removed from the primarily pink jar.
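The infants' behavior is consistent with a simple probability comparison: a lollipop drawn at random from the mostly pink jar is more likely to be pink. A minimal sketch of that comparison, using hypothetical jar counts (the article gives only rough proportions, not exact numbers):

```python
def p_pink(pink: int, black: int) -> float:
    """Probability that a single random draw from the jar is pink."""
    return pink / (pink + black)

# Hypothetical jar contents -- the article does not report exact counts.
mostly_pink_jar = p_pink(pink=80, black=20)    # 0.8
mostly_black_jar = p_pink(pink=20, black=80)   # 0.2

# Crawling toward the pop drawn from the mostly pink jar maximizes
# the chance of getting the (presumably preferred) pink color.
assert mostly_pink_jar > mostly_black_jar
```

This is only an illustration of the underlying inference, not the researchers' actual model; the striking finding is that preverbal infants appear to make a comparison like this implicitly.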
Figuring out how kids' developing brains make these calculations could lead to more intuitive computers that interact more sensitively, intelligently, and responsively, in applications ranging from language learning, online tutoring, and call answering to research labs in need of smarter processing power. “We have computer scientists, but we don’t have computers that are scientists. That kind of causal reasoning and discovery is still something humans can do that computers can’t,” Griffiths says.
He and his colleagues in computer science and psychology are launching a new multidisciplinary center to consolidate their work on infant, toddler, and preschooler cognition and computer programming. “We want to understand how children learn and develop, in order to make better systems that allow machines to solve the kinds of problems that humans are still better at than computers,” he says.