Recently, news about ChatGPT and artificial intelligence has appeared almost daily. With the release of the "smarter" GPT-4, public attention to technology panic and concerns about the ethics of technology has grown. On March 3, China Youth Daily reported on this phenomenon under the title "GPT-4 Rekindles a Hot Topic, Testing the Ethical Boundaries of Technology."

The latest news is that on March 3, local time, the Future of Life Institute, a non-profit organization in the United States, released an open letter titled "Pause Giant AI Experiments." In the letter, more than a thousand AI experts and industry executives called on all AI labs to suspend the development and training of more powerful AI systems for at least six months. They added that if such a pause cannot be enacted quickly, "governments should step in and institute a moratorium."

The potential risks of artificial intelligence to society and humanity have become a consensus among many people in science and technology, including the "godfather of artificial intelligence" Geoffrey Hinton, Tesla and Twitter CEO Elon Musk, and Turing Award winner Yoshua Bengio. To that end, they signed their names to the letter.

"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the open letter reads.

In fact, this is not the first time the Future of Life Institute has publicly called for vigilance about the development of artificial intelligence. The organization was founded in the United States in 2014 to promote research into "optimistic visions of the future" and to "reduce existential risks facing humanity." The latter has always been the focus of its attention.

In 2015, physicist Stephen Hawking, Elon Musk, and other scientists, entrepreneurs, and investors connected to the field of artificial intelligence jointly issued an open letter warning that people must pay more attention to the safety of artificial intelligence and its social benefits.

At that time, AI did not yet present the disturbing "intelligence" it does today. But since then, Musk has said he firmly believes uncontrolled AI "could be more dangerous than nuclear weapons."

Eight years later, amid a far more turbulent moment, the signatories of the new open letter ask: "Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?"

Europol also warned on March 3 that AI chatbots such as ChatGPT are likely to be abused by criminals: "The ability of large language models to detect and reproduce language patterns not only facilitates phishing and online fraud, but can also be used to impersonate the speech style of specific individuals or groups." Europol's Innovation Lab has organized several workshops on what criminals might do with such tools, listing potentially harmful ways to use them.

The signatories hope that the dangerous race will "pause," and that labs will jointly develop shared safety protocols for advanced AI design and development, rigorously audited and overseen by independent outside experts. AI developers also need to work with policymakers to build robust AI governance systems.

At a minimum, that means establishing well-resourced, capable regulatory authorities dedicated to AI.

China Youth Daily / Zhongqing.com reporter Zhang Mi. Source: China Youth Daily