China News Service, Beijing, March 7 (Reporter Sun Zifa) In recent years, artificial intelligence (AI) technology has developed rapidly and been widely applied, attracting much attention for performance that surpasses humans in many tasks.

However, a newly completed study by a team at the Institute of Automation, Chinese Academy of Sciences found that AI-based neural networks and deep learning models "turn a blind eye" to illusory contours: in the "contest" between humans and artificial intelligence, humans have won back a round on illusion perception.

A sample generated by the abutting grating distortion method.

Photo courtesy of Zeng Yi's research team

  Inspired by the phenomenon of illusory contours, which is widely observed in human and biological visual systems, Zeng Yi's research team at the Institute of Automation, Chinese Academy of Sciences proposed a method for converting machine learning vision datasets into illusory contour samples and used it to quantitatively measure how well current deep learning models recognize illusory contours. The experimental results show that deep neural networks, from classic architectures to the most advanced ones, lack the robust illusory contour recognition that humans possess; even the most advanced deep learning algorithms fall far short of human performance on the abutting grating illusion, one instance of illusory contour perception.

  This research paper, which shows that a significant cognitive gap remains between artificial intelligence and humans with respect to illusory contours, was recently published in Patterns, an academic journal of Cell Press.

The study shows that the human visual system is highly robust in recognizing such illusions (robustness here refers to a system's ability to remain stable and adaptive under abnormal or adverse conditions), while AI-based deep learning systems still have fundamental shortcomings compared with biological visual systems.

Pretrained model test results.

Photo courtesy of Zeng Yi's research team

  Why study this?

  Zeng Yi, the paper's corresponding author and head of the Brain-inspired Cognitive Intelligence Research Group at the Institute of Automation, Chinese Academy of Sciences, explained that illusory contours are a classic illusion phenomenon in cognitive psychology: even in the absence of color contrast or luminance gradients, the biological visual system perceives a clear boundary.

This phenomenon has been widely observed in humans and various animal species, including mammals, birds and insects.

  The perception of illusory contours is ubiquitous in independently evolved visual systems, indicating that it plays a fundamental and key role in biological visual processing. Illusory contour perception should therefore also be an essential ability for artificial intelligence vision systems.

Comparison of human experimental results with deep learning test results.

Photo courtesy of Zeng Yi's research team

  Previously, there had been relatively few studies on illusory contour perception in deep learning models. Evaluating a model's robustness to illusory contours is more complicated than evaluating its robustness to ordinary image perturbations, and the main obstacle is the limited supply of illusory contour samples.

The illusory contour stimuli analyzed in most studies were manually designed in earlier psychological literature. These test images do not directly match the tasks on which deep learning models are trained, and because they are so few in number they cannot form a reasonably large test set, making it difficult to measure a deep learning model's illusory contour perception in the usual machine learning manner.

  How was it studied?

  Zeng Yi pointed out that this study mainly examined the ability of deep learning models to recognize the abutting grating illusion, a classic illusory contour phenomenon in which offset gratings induce illusory edges and shapes without any luminance contrast.

In the standard abutting grating illusion, humans perceive a vertical line in the middle even though no physical boundary is actually present.

The abutting grating illusion is widely used in physiological studies to explore how biological vision processes illusory contours.

  The Brain-inspired Cognitive Intelligence Research Group at the Institute of Automation, Chinese Academy of Sciences proposed an image distortion method, called abutting grating distortion, as a tool for quantifying neural network models' ability to perceive illusory contours.

The method can be applied directly to silhouette images that have outer contours but no texture information, so a large number of illusory contour images can be generated systematically.

Since different parameter settings produce different degrees of illusory effect, the study also tested human subjects to understand how different distortion parameters affect their ability to perceive illusory contours.
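To make the idea concrete, below is a minimal sketch, not the authors' released code, of how a binary silhouette could be turned into an abutting grating stimulus: the image is replaced by evenly spaced grating lines, and the lines inside the silhouette are phase-shifted relative to the background, so the shape's boundary is carried by abutting line offsets rather than by any luminance edge. The function name and the interval and shift parameters are illustrative assumptions.

```python
import numpy as np

def abutting_grating(silhouette: np.ndarray, interval: int = 8, shift: int = 4) -> np.ndarray:
    """Convert a binary silhouette (1 = figure, 0 = background) into an
    abutting grating stimulus (hypothetical helper, for illustration only).

    Horizontal grating lines are drawn every `interval` rows; inside the
    silhouette the lines are shifted down by `shift` rows, so the figure's
    outline is carried by the offset grating rather than by a luminance edge.
    """
    h, w = silhouette.shape
    rows = np.arange(h)[:, None]                     # row index, broadcast over columns
    background_lines = (rows % interval == 0)        # grating phase outside the figure
    figure_lines = ((rows + shift) % interval == 0)  # shifted grating phase inside the figure
    grating = np.where(silhouette > 0, figure_lines, background_lines)
    return grating.astype(np.float32)                # 1.0 = white line pixel, 0.0 = black

# Example: a square silhouette becomes a grating whose square outline is illusory.
sil = np.zeros((64, 64), dtype=np.uint8)
sil[16:48, 16:48] = 1
stimulus = abutting_grating(sil, interval=8, shift=4)
```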

Phenomena resembling the endpoint activation of neurons and the topological properties predicted by theory.

Photo courtesy of Zeng Yi's research team

  On the deep learning side, the study trained both fully connected and convolutional networks, and collected 109 publicly available pre-trained models for testing.

At the same time, the study recruited 24 human subjects to evaluate human illusory contour perception and its effect on digit and image recognition under different parameter settings.
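As a rough illustration of how such a test could run, here is a hedged sketch of evaluating one publicly available pre-trained classifier on the distorted images; the choice of ResNet-50 from torchvision and the preprocessing pipeline are assumptions for illustration, not the paper's exact setup.

```python
import torch
from torchvision import models, transforms

# One publicly available pre-trained ImageNet classifier (illustrative choice).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def top1_accuracy(image_label_pairs):
    """Fraction of abutting-grating images whose original class is still
    predicted; `image_label_pairs` yields (PIL image, class index) pairs."""
    correct = total = 0
    with torch.no_grad():
        for img, label in image_label_pairs:
            x = preprocess(img.convert("RGB")).unsqueeze(0)  # add batch dimension
            pred = model(x).argmax(dim=1).item()
            correct += int(pred == label)
            total += 1
    return correct / max(total, 1)
```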

  What was found?

  Fan Jinyu, the paper's first author and an engineer in the Brain-inspired Cognitive Intelligence Research Group at the Institute of Automation, Chinese Academy of Sciences, said that the research combines cognitive science with artificial intelligence. It proposes converting traditional machine vision datasets into abutting grating illusion images drawn from cognitive science, quantitatively measures for the first time the illusory contour perception of a large number of publicly available pre-trained neural network models, and examines deep learning and neural network models' perception of illusory contours from both the neuron dynamics and the behavioral perspective.

  All deep neural network models in the experiments, regardless of whether or how they were trained, produced activations along the illusory contours at the level of neuron dynamics.

Even so, this neuron-level activation did not help the deep neural networks ultimately recognize illusory contours at the behavioral level.
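As a concrete way to read that claim, here is a minimal sketch of how intermediate activations can be inspected with a PyTorch forward hook and then compared against the locations of the illusory contour in the input; the model and the probed layer are illustrative assumptions, not the paper's exact analysis.

```python
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.eval()

feature_maps = {}

def save_activation(name):
    # Forward hook that stores a copy of the layer's output feature map.
    def hook(module, inputs, output):
        feature_maps[name] = output.detach()
    return hook

# Probe an early convolutional stage (illustrative choice); its spatial
# activation pattern can be compared with the illusory contour's location.
model.layer1.register_forward_hook(save_activation("layer1"))

x = torch.rand(1, 3, 224, 224)   # stand-in for an abutting grating input
with torch.no_grad():
    model(x)

print(feature_maps["layer1"].shape)   # e.g. torch.Size([1, 64, 56, 56])
```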

The only model that showed relatively good perception of illusory contours exhibited an endpoint activation effect, suggesting that future breakthroughs may hinge on the relationship between endpoint activation and illusory contours.

  Zeng Yi summarized the highlights of the research in four aspects: first, it proposes a method for systematically generating illusory contour samples; second, it combines visual cognition with machine learning datasets to measure neural networks' ability to perceive illusory contours; third, it tests a large number of publicly available pre-trained neural network models; and fourth, it finds that the models with better illusory contour perception exhibit the endpoint activation phenomenon predicted by computational neuroscience theory.

  The most distinctive feature of this study is that it tested, and in part re-examined, today's seemingly successful artificial neural network models from the perspective of cognitive science, and showed that a large gap remains between artificial neural network models and the visual processing of the human brain. That gap is only the "tip of the iceberg" of the considerable distance between artificial intelligence and human cognition. The mechanisms of the brain and the nature of intelligence will continue to inspire research on artificial intelligence, especially on neural networks.

  "If you want to make a breakthrough in nature, artificial intelligence needs to learn from and be inspired by natural evolution, brain and mind, and establish an intelligent theoretical system. Only such artificial intelligence will have a long-term future." Zeng Yi said.

(End)