Typically, robots learn by direct input from their developers. Lines of code or manual control are the standard ways most robots learn, but now some computer scientists at the University of Washington are taking cues from human babies to help teach their robot "babies."
In a study published in PLOS ONE in early November, the team partnered with the university's developmental psychologists to understand how infants learn.
The psychologists helped the researchers shape their algorithms to reflect the data gathered from infant studies, which were then tested on the robots. The team wanted to see how closely, if at all, the robots would behave like human infants.
Popular Science quotes UW psychology professor Andrew Meltzoff on why human infants guided the study:
"Babies learn through their own play and by watching other and they are the best learners on the planet - why not design robots that learn as effortlessly as a child?"
The team tested its hypothesis in two steps. The first was an experiment in which a simulated robot learned to follow a human's gaze; the second was an experiment in which a real robot learned to imitate a human's actions.
In the gaze experiment, a simulated robot first learns a model of its own head movements and recognizes that a human's head is controlled in a similar way. By watching how the human's head pivots from one point to another to fixate on an object, the robot then follows suit, turning its own head toward the same spot using what it learned earlier about head movements.
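For readers who want a concrete picture, here is a minimal Python sketch of that two-stage idea, not the paper's actual code: the simulated robot first "babbles" with its own head to learn how motor commands map to gaze directions, then assumes the human's head works the same way to guess which object the person is looking at. The numbers, object names, and the linear self-model are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Self-exploration: learn how a pan command maps to a gaze angle ---
# The robot issues random pan commands, observes the resulting (noisy)
# gaze angle, and fits a simple linear self-model to the data.
commands = rng.uniform(-90, 90, size=200)          # pan commands, degrees
observed_gaze = commands + rng.normal(0, 2, 200)   # observed gaze angle
slope, intercept = np.polyfit(commands, observed_gaze, 1)

def gaze_angle(pan_command):
    """Predict where a head (the robot's, or one assumed to work the
    same way) ends up looking for a given pan command."""
    return slope * pan_command + intercept

# --- Gaze following: treat the human's head like the robot's own ---
objects = {"ball": 30.0, "cup": -45.0}   # object bearings from the human
human_pan = -44.0                        # the human turns toward the cup

predicted = gaze_angle(human_pan)
target = min(objects, key=lambda name: abs(objects[name] - predicted))
print(f"Robot infers the human is looking at the {target}")  # -> cup
```

The key design choice the sketch tries to capture is that the robot never gets told what gaze is; it reuses a model learned from its own body to interpret someone else's.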
They even taught the robot what a blindfold does and what happens when someone is blindfolded, mimicking an experiment done with human infants by Meltzoff, who is also co-director of UW's Institute for Learning & Brain Sciences (I-LABS). Meltzoff's groundbreaking research showed that children as young as 18 months can infer the goal of an adult's actions and find different ways to achieve that goal themselves.
In the second experiment, a robot was allowed to explore different objects on a tabletop, pushing them around the surface and learning how its actions changed their locations.
The robot then used what it learned from that exploration to imitate a human who moved objects around the table or cleared everything off entirely. Rather than copying the human's exact actions each time, the robot tried different methods of its own to achieve the same result.
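Here is a similarly hedged Python sketch of that goal-based imitation idea: instead of replaying the human's action sequence, the robot infers the end state of the demonstration and plans its own moves to reach it. The toy tabletop world, action names, and helper functions are hypothetical, meant only to illustrate imitating goals rather than actions.

```python
def demo_end_state(start, human_actions):
    """Replay the human's demonstration only to see what the table looks
    like afterward; the inferred goal is the final arrangement, not the
    particular actions the human used."""
    state = dict(start)
    for act, obj, spot in human_actions:
        if act == "move":
            state[obj] = spot
        elif act == "remove":
            state.pop(obj, None)
    return state

def imitate(start, goal):
    """Reach the inferred goal with the robot's own actions, which may
    differ from the human's (different order, different moves)."""
    plan = []
    for obj in list(start):
        if obj not in goal:
            plan.append(("remove", obj))           # human cleared it off
        elif start[obj] != goal[obj]:
            plan.append(("move", obj, goal[obj]))  # human relocated it
    return plan

start = {"block": "left", "cup": "center", "toy": "right"}
human_actions = [("move", "block", "right"), ("remove", "toy", None)]

goal = demo_end_state(start, human_actions)
print(imitate(start, goal))
# -> [('move', 'block', 'right'), ('remove', 'toy')]
```

The point of the sketch is the separation of "what was the human trying to do?" from "how do I do it with my own body?", which is the behavior the UW team reports observing in both infants and their robot.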
Rajesh Rao, a UW professor of computer science and engineering, explains the implications of the experiments: "You can look at this as a first step in building robots that can learn from humans in the same way that infants learn from humans."
The next step for the UW team is to explore how understanding goals and imitating behaviors can help robots learn more complicated tasks. If the research pans out, you might one day teach a robot how to wash the dishes.