Chinanews.com, March 3 -- The American TV series "Westworld" and the Hollywood film "The Terminator" both depicted how the "awakening" of artificial intelligence (AI) might endanger humanity. Recently, the fears of science fiction have edged toward reality: a number of heavyweights in the AI industry have abruptly called for a moratorium on developing more powerful AI systems, warning that AI systems with human-level intelligence may "pose a potential risk to society and humanity."

Where do their concerns come from? Could AI development cost humanity control of its own civilization? Has AI reached a turning point where the "pause button" must be pressed?

An open letter asking four questions about AI risks

Recently, American billionaire Elon Musk, Turing Award winner and leading AI expert Yoshua Bengio, and others signed an open letter calling for a pause of at least six months on the development of AI systems more powerful than GPT-4, saying such systems "pose a potential risk to society and humanity."

An open letter calling for a moratorium on giant AI training. Image source: "Future Life Institute" website

The letter, posted on the website of the nonprofit Future of Life Institute, argues that before developing a powerful AI system, developers should first confirm that its impact will be positive and its risks manageable.

The letter details the enormous risks that AI systems with human-level intelligence could pose to society and humanity, noting that "this has been affirmed by extensive research and acknowledged by top AI labs," and poses four questions in succession:

- Should we let machines flood our information channels with propaganda and lies?

- Should we automate away all jobs, including the fulfilling ones?

- Should we develop non-human minds that might eventually outnumber us, outsmart us, and be able to eliminate and replace us?

- Should we risk losing control of our civilization?

The letter states that if such a moratorium cannot be enacted quickly, governments should step in and impose one. In addition, "AI labs and independent experts should use this pause to jointly develop and implement a shared set of safety protocols for advanced AI design and development, rigorously audited and overseen by independent outside experts."

Infographic: Schematic diagram of a humanoid robot typing on a computer.

The letter also calls on developers and policymakers to work together to dramatically accelerate the development of robust AI governance systems. At a minimum, these should include regulatory bodies, auditing and certification systems, monitoring and tracking of high-performance AI systems, liability for harm caused by AI, and public funding for research into AI safety.

The letter concludes by noting that society has already hit pause on other technologies with potentially catastrophic effects, and should do the same for artificial intelligence: "let us enjoy a long 'AI summer' instead of entering autumn unprepared."

Technology, ethics, or interests: what lies behind the call to slow down?

It is not hard to see from the letter that its supporters are concerned less with the technology itself than with the risk of artificial intelligence growing unchecked, and with the social and ethical dilemmas its development may bring.

The open letter notes that the pause does not mean halting AI development in general, but rather stepping back from a dangerous race. AI research and development should refocus on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, consistent, trustworthy, and reliable.

"The letter is not perfect, but its spirit is right," said New York University professor Gary Marcus, who argues that people need to slow down until they can better understand the consequences of all this.

The open letter has so far gathered thousands of signatures, including Stability AI CEO Emad Mostaque, researchers at DeepMind (the AI company under Google's parent Alphabet), and computer scientist Stuart Russell, among other experts, scholars, and technology executives.

File photo: Elon Musk.

The Future of Life Institute, which issued the open letter, is funded mainly by the Musk Foundation and the Silicon Valley Community Foundation. Tesla CEO Elon Musk applies AI in the company's Autopilot system, and he has long been candid about his concerns.

Reuters reported that Europol has likewise raised ethical and legal concerns about advanced AI such as ChatGPT, warning that such systems could be misused for phishing, disinformation, and cybercrime. The UK government, for its part, has announced proposals for an AI regulatory framework.

Online, the letter also sparked heated discussion. Some netizens agreed that people need to understand what is happening: "Just because we know we can build it doesn't mean it should be built."

But others questioned Musk's motives: "Musk signed it because he wants to make money with his own artificial intelligence."

Another commenter wrote, "It's scary, but I don't think some projects need to be stopped. Technology is evolving rapidly, and responsible innovation is necessary."

Reflection on the relationship between humans and AI has never ceased since the technology's birth. The famous physicist Stephen Hawking once said, "The successful creation of artificial intelligence would probably be the greatest event in the history of human civilization. But if we don't learn how to avoid the risks, we could put ourselves in a desperate situation."

Today, from passing professional qualification exams to "creating" art, AI can do more and more, moving ever further toward genuine intelligence. At the same time, however, crimes that exploit AI technology, such as online fraud, extortion, and the spread of illegal information, are also emerging.

Unlike the long cycles of technological innovation in the physical world, AI breakthroughs can spread worldwide overnight via the Internet. Until humanity finds proper answers on AI ethics, regulation, and related questions, such discussion and concern will be anything but superfluous.