In recent years, artificial intelligence technology has developed rapidly worldwide and found increasingly wide application. Generative artificial intelligence in particular, represented by ChatGPT, has spawned innovative applications in many fields, bringing new opportunities to all walks of life.

While empowering social and economic development, the rapid advance of artificial intelligence also brings potential security risks and challenges. In particular, the technology's legal, ethical and humanitarian implications, and its complex effects on international politics, economics, military affairs, society and other fields, have drawn the attention and discussion of the international community. Cross-border cooperation on, and normative governance of, AI have become especially urgent.

Earlier this month, 28 countries and regions, including China, the United States, the United Kingdom and the European Union, signed the Bletchley Declaration at the inaugural AI Safety Summit, agreeing to work together to build an "internationally inclusive" network of cutting-edge AI safety research to deepen understanding of AI risks and capabilities that are not yet fully understood.

In October this year, during the third Belt and Road Forum for International Cooperation, China put forward the Global Artificial Intelligence Governance Initiative (hereinafter the "Initiative"), which systematically sets out China's approach to AI in three respects: development, security and governance, and offers constructive solutions for global AI governance.

People-oriented, AI for good

In the Initiative, China proposes that the development of artificial intelligence should adhere to a "people-oriented" approach, with the goal of improving the common well-being of humanity, on the premise of safeguarding social security and respecting human rights and interests, so that AI always develops in a direction conducive to the progress of human civilization. The Initiative also calls for actively supporting the use of AI to advance sustainable development and address global challenges such as climate change and biodiversity conservation.

Zeng Yi, director of the Center for Artificial Intelligence Ethics and Governance at the Institute of Automation, Chinese Academy of Sciences, believes that artificial intelligence, as an "enabling" technology for social development, should above all serve to improve human well-being, which requires ensuring the robustness and security of AI technology. The "people-oriented" stance of China's Initiative, he notes, reflects a global and inclusive approach to governance.

Could the development of artificial intelligence open an unexpected "Pandora's box"? According to Zhang Xin, director of the Digital Economy and Legal Innovation Research Center at the University of International Business and Economics, the development of AI, and of generative AI in particular, has not only aggravated the risks of existing technologies but also created a series of new ones, disrupting the social and ethical order and raising a range of security and ethical challenges.

"The risks of AI technology are hidden, cross-domain, global, and long-term. Although there are currently a series of laws and regulations, ethical norms and technical standards to govern AI, the tendency of capital to pursue profits may lead to technological alienation, which requires sustained, systematic and coordinated governance efforts at the international level. Zhang Xin said.

In the Initiative, China clearly states that the development of AI should adhere to the principle of "AI for good", abide by applicable international law, conform to the common values of peace, development, fairness, justice, democracy and freedom, and jointly prevent and combat the misuse of AI technology by terrorist and extremist forces and transnational organized crime groups.

In fact, since the beginning of this year many countries and regions, including the United States, the United Kingdom and the European Union, have moved to govern artificial intelligence and put forward their own plans. Zhu Rongsheng, a special expert at the Center for International Security and Strategy at Tsinghua University, pointed out that some of these plans focus on governance principles while others offer action guidelines, but what the international community most needs at present is a text that can crystallize international consensus and serve as an outline; China's Initiative provides an important basis for such a consensus text on AI governance.

Improve the legal system and put ethics first

China is a major country in artificial intelligence. Data show that the scale of China's core AI industry has reached 500 billion yuan, with more than 4,300 AI enterprises and a steady stream of innovative achievements. According to data from the World Intellectual Property Organization, Chinese enterprises and institutions filed nearly 34,000 AI-related patent applications in 2022, accounting for more than 40% of global AI patent applications.

While pursuing the development of the technology and related industries, China has been committed to improving the safety, reliability, controllability and fairness of AI technology.

In April this year, the Cyberspace Administration of China (CAC) issued the Measures for the Administration of Generative Artificial Intelligence Services (Draft for Comment). The draft supports independent innovation, popularization, application and international cooperation in basic technologies such as AI algorithms and frameworks, while making clear that providers of generative AI products or services must comply with laws and regulations, respect social morality, public order and good customs, and must not illegally obtain, disclose or use personal information, privacy or trade secrets.

In the Initiative, China also proposes gradually establishing and improving relevant laws and regulations to protect personal privacy and data security; putting ethics first by establishing and improving AI ethical norms, rules and accountability mechanisms, formulating AI ethics guidelines, and building a science and technology ethics review and oversight system; and upholding the principles of fairness and non-discrimination, avoiding prejudice and discrimination on the basis of ethnicity, belief, nationality, gender and the like.

Zhang Xin said that the rule-of-law character of global AI governance is becoming increasingly prominent, with a gradual shift toward a new governance model in which soft law and hard law operate in parallel and rigidity is combined with flexibility. "China's proposal to establish and improve AI ethical norms, rules and accountability mechanisms is not only in line with the trend of international governance but can also effectively respond to the new challenges posed by AI technology, reflecting China's constructive contribution to building a global AI governance order."

Wu Shenkuo, a doctoral supervisor at the Law School of Beijing Normal University and deputy director of the research center of the Internet Society of China, pointed out that in the face of AI's rapid development, some countries and regions fall short in understanding and governance capacity, while others use their technological advantages to pursue technological hegemony, harming the development interests of other countries and peoples. "All parties should work together to implement China's Initiative and properly address the conflicting rules, social risks and ethical challenges brought about by the development of science and technology."

Equal development, bridging the gap

A study by the International Monetary Fund found that new technologies such as artificial intelligence could widen the gap between rich and poor countries by redirecting more investment to advanced economies where automation is already widespread. Experts point out that an "intelligence gap" and a governance gap exist between developing and developed countries in the field of AI, and that developing countries risk falling a generation behind developed countries in science, technology and industrial development.

In this regard, China proposes in the Initiative that all countries, regardless of size, strength or social system, have an equal right to develop and use artificial intelligence. It calls for enhancing the representation and voice of developing countries in global AI governance, ensuring equal rights, equal opportunities and equal rules for all countries in AI development and governance, carrying out international cooperation with and assistance to developing countries, and continuously bridging the intelligence gap and the gap in governance capacity.

Zhu Rongsheng said that AI development and governance currently show a short-term pattern in which "the strong move first and the strong set the rules", and some developing countries with weak technological foundations risk falling into the dilemma of lagging behind and simply following the strong. "China supports the participation of developing countries in formulating the rules of global AI governance, and calls for international cooperation with and assistance to developing countries through infrastructure, talent training, joint research and development, market exchanges and the like, which is of great significance for global AI governance."

At the first AI Safety Summit, China stated that, as a developing country, it is willing to strengthen exchanges and communication with all parties on AI safety issues, reflect the common concerns of the "Global South" about emerging technologies, contribute to the formation of an international mechanism with universal participation and a governance framework with broad consensus, help AI technology better benefit humanity, promote global sustainable development, and jointly build a community with a shared future for mankind.

Zhang Li, a researcher at the China Institutes of Contemporary International Relations, believes that the Initiative responds to the general concerns and worries of the international community, especially developing countries, about the development of artificial intelligence, and fully takes into account developing countries' concern that the development of AI technology could threaten their national security and sovereignty. On AI governance, a major issue for the international community, China has spoken out for developing countries, reflecting its responsibility as a major country to promote the improvement of global governance.

Many countries and regions are concerned about AI governance

European Union

In June this year, the European Parliament voted in plenary session to adopt its negotiating position on the proposed Artificial Intelligence Act, moving the EU's legislation to strictly regulate the use of AI technology into the final stage of negotiations. The act would be the world's first comprehensive regulation of artificial intelligence. A striking feature of the draft is its focus on a risk-based regulatory regime that balances the innovative development of AI with safety norms.

United States

On October 30 this year, U.S. President Joe Biden signed an executive order setting new standards for AI safety and security. The executive order requires developers of the nation's most powerful AI systems to share their safety test results and other critical information with the government; improves relevant standards and testing tools to ensure that AI systems are safe and reliable; establishes new standards for screening biological synthesis to guard against the risk of using AI to engineer dangerous biological materials; establishes standards and best practices for detecting AI-generated content and authenticating official content to help protect citizens from AI-driven fraud; establishes an advanced cybersecurity program to develop AI tools that find and fix critical software vulnerabilities; and directs the development of a National Security Memorandum to guide further action on AI and security.

Other countries

Japan, Singapore, Canada and Germany have each shaped their legislative direction according to their scientific and technological foundations and strategic development needs. Facing the legal issues raised by AI safety, Japan has chosen to start with personal information and data protection, clarifying the rights and responsibilities of data subjects to foster a collaborative R&D network between government and enterprises and build an ecosystem conducive to the safe development of AI. Germany's AI safety legislation focuses on the technology's impact on the economy and daily life, centering on topics such as autonomous driving and smart healthcare that are closely tied to economic activity and people's lives. Singapore and Canada emphasize the value of AI as a strategic technology for empowering the economy and society, and have pursued relevant legislation starting with data and security supervision.

Source: Xinhua News Agency, International Institute of Technology and Economics, Development Research Center of the State Council, etc.

Reporter: Liu Yao. Source: People's Daily Overseas Edition