DeepMind: AI learns basic physics from videos

A toddler learns astonishingly quickly how the world works: through observation, trial and error. Around the age of one, for example, babies grasp what is called “object permanence” – the fact that things don’t simply appear or disappear. Despite all the advances in machine learning, AI systems have so far struggled with such concepts. Now researchers at the Google subsidiary DeepMind have presented an AI that is said to have learned an intuitive understanding of physics. To show that the AI actually uses these concepts, the researchers drew on approaches from developmental psychology. The accompanying paper appears in the journal Nature Human Behaviour.

To train the model, called Plato, the researchers showed it videos from a synthetic data set they had created themselves, in which the physical behavior of objects could be observed: balls rolled and bounced, collided with other objects, were at times hidden behind other objects, and fell to the ground. This was meant to teach Plato five basic concepts of physics: that physical objects do not suddenly appear or disappear, that they occupy a space no other object can occupy at the same time, that they do not abruptly change their shape or size, and that motion does not usually change direction or speed abruptly.

From a technical point of view, Plato consists of familiar components. An autoencoder learns an internal representation from the video frames – but in contrast to models that learn characteristic features of the entire frame, the researchers used segmentation masks to force the model to form internal representations of individual objects. A second stage, a so-called long short-term memory (LSTM) network, then uses an object's observed behavior to form hypotheses about its future behavior.
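The two-stage idea – encode each object into its own state, then roll that state forward – can be sketched in strongly simplified form. Everything here is illustrative: the names, the 2D position/velocity state, and the constant-velocity extrapolation are assumptions standing in for DeepMind's learned autoencoder and LSTM, not their actual implementation.

```python
# Toy sketch of PLATO's two-stage design: (1) a per-object encoder,
# (2) a recurrent predictor that extrapolates each object's state.
# All names and the constant-velocity dynamics are illustrative
# assumptions, not DeepMind's actual architecture.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ObjectState:
    """Per-object latent: here just 2D position and velocity."""
    x: float
    y: float
    vx: float
    vy: float

def encode_frame(segmented_objects: List[Tuple[float, float]],
                 previous: List[ObjectState]) -> List[ObjectState]:
    """Stand-in for the object-wise autoencoder: turn segmented object
    positions into per-object states, estimating velocity as a finite
    difference against the previous state."""
    states = []
    for (x, y), prev in zip(segmented_objects, previous):
        states.append(ObjectState(x, y, x - prev.x, y - prev.y))
    return states

def predict_next(state: ObjectState) -> ObjectState:
    """Stand-in for the LSTM predictor: extrapolate assuming motion
    does not suddenly change direction or speed."""
    return ObjectState(state.x + state.vx, state.y + state.vy,
                       state.vx, state.vy)

# A ball rolling to the right at constant speed.
prev = [ObjectState(0.0, 0.0, 0.0, 0.0)]
seen = encode_frame([(1.0, 0.0)], prev)   # observed at x = 1
pred = predict_next(seen[0])              # expected at x = 2 next frame
print(pred.x, pred.y)                     # → 2.0 0.0
```

The key design choice mirrored here is that each object carries its own state vector, instead of the whole frame being compressed into one representation.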

To test whether the model had learned these concepts, the researchers then showed the AI video sequences in which one of these basic physical concepts is violated. Young children react to such startling experiences with increased attention, because their expectations of what will happen next have been violated. The developmental psychologist Jean Piaget, born in 1896, was the first to formulate a hypothesis as to why this is so: according to him, action and sensory perception in humans are connected in “sensorimotor loops” – every action is linked to an internal prediction of what will happen next, which is then compared with the subsequent sensory impression. If prediction and sensory stimulus diverge, this triggers attention and a learning process.

The DeepMind researchers now applied this “violation-of-expectation paradigm” from developmental psychology by comparing the model's internal prediction with the external observations in the experiment. Indeed, the test cases in which objects behaved “unphysically” produced a large prediction error – and this after a short training phase of only 28 examples, and even when the test used objects that did not appear in the training data set.
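The violation-of-expectation measurement itself is simple in principle: compare what the model predicts with what the clip actually shows, and treat the prediction error as a “surprise” signal. The minimal sketch below assumes a toy one-dimensional predictor and a squared-error surprise measure; both are illustrative stand-ins, not the metric used in the paper.

```python
# Toy sketch of the violation-of-expectation test: the prediction error
# between the model's expectation and the observed frame serves as a
# "surprise" signal. The constant-velocity predictor and squared-error
# measure below are illustrative assumptions only.

def predict(pos: float, vel: float) -> float:
    """Extrapolate one step assuming smooth, unchanged motion."""
    return pos + vel

def surprise(predicted: float, observed: float) -> float:
    """Squared prediction error, compared across test clips."""
    return (predicted - observed) ** 2

vel = 1.0
# Physically plausible clip: the ball keeps rolling as expected.
physical = surprise(predict(3.0, vel), 4.0)
# "Unphysical" clip: the ball suddenly jumps to x = 9.
unphysical = surprise(predict(3.0, vel), 9.0)

print(physical, unphysical)   # → 0.0 25.0
assert unphysical > physical  # the impossible event yields the larger error
```

In the study, the same comparison is made in the model's learned latent space rather than on raw positions, but the logic – large error means violated expectation – is the same.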

Other models, by contrast, which Luis Piloto of DeepMind and colleagues used for comparison and which were not trained on the behavior of individual objects, showed no corresponding prediction errors. According to Piloto, however, this could be because these models “have to take into account a lot more details. For example, what color the objects are, whether they are big or small. And with all these details, they first have to learn whether they are important or not. And for that you need a lot more examples.” Focusing the model on discrete objects is therefore an important step, the researchers say.

Whether the model has really learned something like an intuitive understanding, however, is a question of interpretation. Research into animal intelligence shows how difficult this can be. The British zoologist Antone Martinho-Truswell of Magdalen College, Oxford, for example, trained newly hatched ducklings to follow motor-driven objects such as spheres or cuboids – say, two blue spheres. The researchers then had the animals choose between two spheres of different colors and two objects of the same shape but different colors. The result: the ducklings chose the identical shapes.

“We wanted to show that any system that can identify any object is also capable of learning abstract categories,” says Martinho-Truswell. “It doesn’t mean the ducks learned an abstract concept. All we can say is that they behave as if they learned an abstract concept.” Indeed, the DeepMind researchers are cautious about interpreting their results. “Our model doesn’t directly answer questions from developmental psychology,” says Peter Battaglia of DeepMind, who was also involved in the study. “Perhaps our model is oversimplified, but we hope it can be a starting point to test hypotheses about human learning.”

Plato is not, however, the first AI able to learn basic physical laws or even cause-and-effect relationships. For several years, groups around the world have been exploring a wide variety of methods for teaching AI systems “causal inference” – the recognition of cause and effect. A central element of all these techniques is testing the learned models against an internal prediction. In the specialist discipline of “developmental robotics”, roboticists are likewise working on letting robots learn like small children via sensorimotor loops.



More from MIT Technology Review


(wst)


