Reverse-engineering the infant mind

May 31, 2011


A new MIT study shows that babies can perform sophisticated analyses of how the physical world should behave.

The scientists developed a computational model of infant cognition that accurately predicts infants’ surprise at events that violate their conception of the physical world.

The model, which simulates a type of intelligence known as pure reasoning, calculates the probability of a particular event, given what it knows about how objects behave.

The close correlation between the model’s predictions and the infants’ actual responses to such events suggests that infants reason in a similar way, says Josh Tenenbaum, associate professor of cognitive science and computation at MIT.

Reverse-engineering infant cognition

The study is the first step in a long-term effort to “reverse-engineer” infant cognition by studying babies at 3, 6 and 12 months of age (and other key stages through the first two years of life) to map out what they know about the physical and social world. That “3-6-12” project is part of a larger Intelligence Initiative at MIT, launched this year with the goal of understanding the nature of intelligence and replicating it in machines.

Tenenbaum and Edward Vul, a former MIT student who worked with Tenenbaum and is now at the University of California, San Diego, developed a computational model, known as an “ideal-observer model,” to predict how long infants would look at animated scenarios that were more or less consistent with their knowledge of objects’ behavior.

The model starts with abstract principles of how objects can behave in general (the same principles that researchers showed infants have), then runs multiple simulations of how objects could behave in a given situation.

In one example, 12-month-olds were shown four objects — three blue, one red — bouncing around a container. After some time, the scene was occluded, and while it was hidden, one of the objects exited the container through an opening.

If the scene was occluded very briefly (0.04 seconds), infants were surprised if one of the objects farthest from the exit had left the container. If the scene was occluded longer (2 seconds), distance from the exit became less important, and infants were surprised only if the rare (red) object was the one to exit. At intermediate durations, both the distance to the exit and the number of objects of each color mattered.
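This qualitative pattern can be reproduced with a toy Monte Carlo simulation in the spirit of an ideal-observer model. Everything below (the one-dimensional random-walk physics, the function name `exit_color_probs`, and all parameter values) is an illustrative assumption, not the study’s actual model: after a brief occlusion, only objects near the exit can plausibly leave, while after a long occlusion the starting positions are effectively “forgotten” and only the base rate of each color matters.

```python
import random

def exit_color_probs(positions, colors, steps, step_size=0.05,
                     trials=2000, seed=0):
    """Toy Monte Carlo sketch of an ideal-observer prediction
    (hypothetical physics, not the paper's actual model).

    Objects random-walk in a 1-D box [0, 1] with an exit at x = 0.
    After `steps` diffusion steps (the occlusion period), the object
    nearest the exit is the one predicted to leave.  Returns the
    probability, per color, that the exiting object has that color."""
    rng = random.Random(seed)
    counts = {c: 0 for c in set(colors)}
    for _ in range(trials):
        xs = list(positions)
        for _ in range(steps):
            for i in range(len(xs)):
                # one random-walk step, clamped at the container walls
                xs[i] = min(max(xs[i] + rng.choice([-step_size, step_size]),
                                0.0), 1.0)
        nearest = min(range(len(xs)), key=lambda i: xs[i])
        counts[colors[nearest]] += 1
    return {c: n / trials for c, n in counts.items()}

# Three common blue objects, one rare red one; the red object
# starts farthest from the exit.
positions = [0.1, 0.5, 0.7, 0.9]
colors = ["blue", "blue", "blue", "red"]

brief = exit_color_probs(positions, colors, steps=1)    # very short occlusion
long_ = exit_color_probs(positions, colors, steps=300)  # long occlusion
```

With `steps=1`, the nearest (blue) object always exits in the simulation, so a red exit would be maximally surprising; with `steps=300`, the red object’s exit probability approaches its base rate of roughly one in four. An observer’s surprise at a given outcome can then be scored as, for example, `-log p` of that outcome, and compared against how long infants look.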

The computational model accurately predicted how long babies would look at the same exit event under a dozen different scenarios, varying the number of objects, their spatial positions and the duration of the occlusion.

This marks the first time that infant cognition has been modeled with such quantitative precision, and suggests that infants reason by mentally simulating possible scenarios and figuring out which outcome is most likely, based on a few physical principles.

Babies understand physical principles and how the world works

Tenenbaum plans to further refine his model by adding other physical principles that babies appear to understand, such as gravity or friction. He is also developing similar models for infants’ “intuitive psychology,” or understanding of how other people act. Such models of normal infant cognition could help researchers figure out what goes wrong in disorders such as autism.

Another avenue of research is the origin of infants’ ability to understand how the world works. In a paper published in Science in March, Tenenbaum and several colleagues outlined a possible mechanism, also based on probabilistic inference, for learning abstract principles from very early sensory input.

Ref.: Ernő Téglás et al., Pure Reasoning in 12-Month-Old Infants as Probabilistic Inference, Science, 27 May 2011: 1054-1059. [DOI:10.1126/science.1196404]