China News Service, March 26. The 2024 Annual Meeting of the China Development Forum was held on March 24-25, 2024. On the afternoon of March 24, the forum hosted a "Symposium on Artificial Intelligence Development and Governance." Speaking in the "Group Discussion II" session, Zhang Yaqin, dean of the Institute for AI Industry Research at Tsinghua University and academician of the Chinese Academy of Engineering, said that countries around the world must cooperate in the field of intelligence to jointly address the red lines of artificial intelligence as a matter of social risk.

  Zhang Yaqin briefly summarized six development trends in large artificial intelligence models. First, new intelligence that is multimodal, multiscale, and cross-modal. Second, major breakthroughs in overall architecture within the next five years, still following the scaling law, though not necessarily based on the transformer model structure. Third, intelligence gradually extending to edge devices such as "AI phones" and "AI PCs." Fourth, autonomous intelligence capable of defining tasks, planning paths, self-upgrading, and self-coding. Fifth, information-based intelligence gradually moving into the physical world. Sixth, biological intelligence that connects large models with living organisms.

  Zhang Yaqin believes that over the next five years, as artificial intelligence technology is applied at scale across major fields, three kinds of risk will emerge. The first is risk in the information world, including errors and false information, as well as the hallucination problem that has already appeared in large models. The second is that as large-scale applications extend from information intelligence to physical intelligence and biological intelligence, the risks will scale up accordingly. The third arises when large models are connected to economic systems, financial systems, military systems, and power grids.

  To guard against these risks, Zhang Yaqin put forward five long-term suggestions.

  The first is to label AI-generated intelligent entities, such as digital humans, in the same way that advertisements are labeled.

  The second is to establish a mapping and registration mechanism, making clear that a robot, as a subordinate entity, must be mapped to a principal. This principal can be a legal person such as an individual or a company. If a problem arises with the robot, responsibility can then be traced to its principal.

  The third is to establish a tiered regulatory system and mechanism, applying graded oversight to large models deployed in different domains such as the physical world and biological systems.

  The fourth is to increase investment in research on large-model risks, calling on governments, scientists, technology practitioners, and entrepreneurs to participate together, pursuing development and governance in parallel.

  The fifth is to draw red lines for the development of artificial intelligence. The International Dialogue on AI Safety held in Beijing has already produced the "Beijing International Consensus on AI Safety," which proposes red lines for AI development. Countries around the world must cooperate in the field of intelligence to jointly address these red lines as a matter of social risk.