Is the breakneck development of artificial intelligence dangerous? That, at least, is what Elon Musk and hundreds of global experts believe: on Wednesday, March 29, they signed a call for a six-month pause in research on AI systems more powerful than GPT-4, the OpenAI model launched in mid-March, citing "major risks for humanity".

In the petition, published on the Futureoflife.org website, they call for a moratorium until safety measures are in place, including new dedicated regulatory authorities, oversight of AI systems, techniques to help distinguish the real from the artificial, and institutions capable of managing the "dramatic economic and political disruption (especially for democracy) that AI will cause".

The petition brings together figures who have already publicly voiced fears of uncontrollable AI surpassing humans, including Elon Musk, owner of Twitter and founder of SpaceX and Tesla, and Yuval Noah Harari, the author of "Sapiens".

Yoshua Bengio, a Canadian AI pioneer, also a signatory, expressed his concerns at a virtual press conference in Montreal: "I don't think society is ready to face this power, the potential for manipulation of populations, for example, that could endanger democracies."

"We must therefore take the time to slow down this trade race that is on the way," he added, calling for these issues to be discussed at the global level, "as we have done for energy and nuclear weapons".

"Society needs time to adapt"

Sam Altman, the head of OpenAI, the developer of ChatGPT, has himself admitted to being "a little scared" by his creation, should it be used for "large-scale disinformation or cyberattacks".

"Society needs time to adapt," he told ABCNews in mid-March.

"Recent months have seen AI labs locked into an uncontrolled race to develop and deploy ever more powerful digital brains that no one – not even their creators – can reliably understand, predict or control," they said.


"Should we let machines flood our information channels with propaganda and lies? Should we automate all jobs, including those that are rewarding? Should we develop non-human minds that could one day be more numerous, more intelligent, make us obsolete and replace us? Should we risk losing control of our civilization? These decisions should not be delegated to unelected technology leaders," they conclude.

The signatories also include Apple co-founder Steve Wozniak, members of Google's AI lab DeepMind, Emad Mostaque, head of OpenAI competitor Stability AI, as well as American AI experts and academics and senior engineers at Microsoft, an OpenAI ally.

With Reuters and AFP
