Teresa Guerrero

Madrid

Updated Thursday, February 1, 2024 - 20:01

To train artificial intelligence (AI) systems, scientists inject enormous amounts of data into them so that they can operate and make decisions in a way similar to humans.

People, however, learn differently: little by little, in different environments and over the years. If we want machines that really think and learn like us, why not teach artificial intelligence the same way we teach a baby?

That is the idea behind an experiment designed by a team at New York University, which turned a child into an AI teacher.

The results of the experiment were published this Thursday in the journal Science, and its authors hope they will serve two purposes: helping AI systems learn in a way more similar to how people do, and improving our understanding of how humans learn and acquire language.

Researchers at New York University analyzed a child's learning process by recording first-person video with a lightweight camera mounted on a hat or headband placed on the child's head. The information collected through the eyes and ears of a single child was then used to train a multimodal artificial intelligence system.


Each child is different, but in general, children begin to learn their first words from about half a year of age, normally between six and nine months of life, when they begin to connect words with the objects and concepts they see in the world. According to this study, by the time they are between one and a half and two years old, most can understand an average of 300 words. However, it is not well understood how children acquire their first words, or how the link between words and visual representations is formed.

Although this topic has been the subject of debate and several hypotheses have been proposed, the scientists behind this work note that early language acquisition has traditionally been examined in laboratory settings, with findings that cannot be generalized to what occurs in a natural setting, such as a child's daily life and the activities they carry out while awake. Better understanding this process in children could allow next-generation artificial intelligence systems to develop links between words and visual representations, explains the team, led by Wai Keen Vong.


The experiment lasted a year and a half and consisted of placing a lightweight camera on the head of a single child. The images the child saw and the sounds he heard were recorded roughly weekly as he grew, between six and 25 months of age. He was recorded in all kinds of situations: eating, playing at home with his toys or in the park, being read books, or interacting with his pet.

They collected hundreds of hours of recordings, of which they used 61. With these recordings, taken on the days the camera was worn, the researchers trained an artificial intelligence system called the Child's View for Contrastive Learning model (CVCL) to determine whether it could learn the words and concepts present in a child's everyday experience.
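The model's name hints at how it learns: contrastive learning pulls together the representation of a video frame and the words spoken at the same moment, while pushing apart mismatched frame-word pairings. A minimal sketch of this kind of objective follows; the embedding sizes, temperature value, and function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def normalize(x):
    # L2-normalize each row so dot products become cosine similarities
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def contrastive_loss(frame_emb, word_emb, temperature=0.07):
    """InfoNCE-style loss over a batch of (video frame, spoken word) pairs.

    Matched pairs share the same row index; every other pairing in the
    batch acts as a negative example. This is a generic contrastive
    objective, sketched here only to illustrate the idea behind CVCL.
    """
    f = normalize(frame_emb)
    w = normalize(word_emb)
    logits = f @ w.T / temperature  # pairwise frame-word similarities
    # Softmax cross-entropy with the diagonal (matched pair) as the target
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    n = len(f)
    return -np.log(probs[np.arange(n), np.arange(n)]).mean()
```

When frame and word embeddings for matched pairs are close, the loss is near zero; for unrelated embeddings it stays high, which is what drives the model to align what the child sees with what he hears.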


The findings published this Thursday in Science show that the model, or neural network, could in fact learn a substantial number of words and concepts present in the child's daily life from limited fragments of what he experienced. Although the video captured only about 1% of the child's waking hours, that was enough for the machine to learn the language.

In addition, some of the words the model learned could be generalized to situations different from those seen in training, which, according to the scientists, reflects an aspect of generalization also observed in children when they are tested in the laboratory.