Elon Musk, boss of Tesla; Steve Wozniak, co-founder of Apple; Andrew Yang, former Democratic presidential candidate; and two recipients of the Turing Award, the equivalent of the Nobel Prize in the field of artificial intelligence (AI): all are among the more than 1,000 public figures and scientists who have signed an open letter calling for an urgent pause in the development of language models like ChatGPT, published just hours after the launch of the chatbot's latest version on Wednesday, March 29.

"It's really time to take a step back, and think about the implications of these technologies that are currently being put in the hands of millions of people," said Jaromír Janisch, an artificial intelligence specialist at the Czech Polytechnic University in Prague, and one of the signatories of the text.

A letter with apocalyptic overtones

For the authors of this open letter, AIs such as ChatGPT are not just endearing, hyper-gifted "chatbots" to converse with, or handy tools for a student looking to cheat in class. "We must immediately pause [...] the development of systems more powerful than GPT-4 [the latest iteration of the model that powers ChatGPT]," the document reads.

A further step in this direction could lead humanity to "develop non-human consciousnesses that would render us obsolete and replace us," the authors write. For them, what is at stake is "the loss of control over the future of our civilization."

These apocalyptic overtones are consistent with the ideology promoted by the Future of Life Institute, the think tank behind the initiative. Highly influential in Silicon Valley and funded in part by Elon Musk, the institute advocates technological solutions to save humanity from "existential" threats... such as a deadly AI.

The institute's approach is rooted in a controversial philosophy, "longtermism," which notably inspired Sam Bankman-Fried, the ousted boss of the FTX cryptocurrency empire. Proponents of this school of thought believe that humanity must be protected at all costs from anything that could wipe it out, so that it can reach its full potential over the very long term. Its detractors warn against the excesses of an ideology that could be used to justify, for example, mass surveillance imposed to "protect" humanity without its knowledge.

"This open letter is a nameless mess that rides the media wave of AI without addressing the real issues," said Emily M. Bender, a researcher at the University of Washington and co-author of a landmark paper on the dangers of AI published in 2020. "The prospect of overly powerful superintelligence or non-human consciousness is still largely science fiction. There is no need to play on the fears of a hypothetical future when artificial intelligence already represents a danger to democracy, the environment and the economy," says Daniel Leufer, a specialist in innovative technologies working on the societal challenges posed by AI for the Internet rights NGO Access Now.

The pause is necessary

Ironically, some of the signatories interviewed by France 24 themselves have reservations about the document or its underlying philosophy. "The letter may not be perfect, but there is no doubt that we need to take time to reflect now," said Carles Sierra, president of the European Association for Artificial Intelligence (EurAI) and director of the AI research unit at the Autonomous University of Barcelona.

For all of them, though, the pause is necessary. "Everything is moving too fast in the adoption of these technologies, while there is no real debate within the scientific community on the subject," said Joseph Sifakis, research director at the University of Grenoble and the only French recipient of the Turing Award (in 2007).

It is as if scientists had been caught off guard by how fast their creation is growing. "The explosion of deep learning [the ability of an AI to extract information from a mass of data and draw connections between those pieces of information, editor's note] a decade ago changed everything," says David Krueger, an expert in artificial intelligence at the University of Cambridge.

Advances in "deep learning" then led "about five years ago to the first impressive practical results of text generation thanks to language models [such as ChatGPT, editor's note]," adds Vincent Corruble, a computer scientist at the Sorbonne University.

For the authors of the open letter, GPT-4 may represent a tipping point. "Scientists believe that as soon as there are signs of the emergence of artificial general intelligence (AGI), it will become necessary to take a break. And for some, with GPT-4 we are starting to see them," explains Vincent Corruble.

AGI would be a machine capable of performing as well as or better than humans in most intellectual fields, not just in specialized tasks such as beating a human at chess or recognizing faces in a photo. GPT-4 is starting to resemble one because it already has more than one trick up its sleeve: it can hold a conversation and also interpret images, not just text.

But not everyone believes that artificial general intelligence is anything more than an unattainable chimera. Better, these skeptics argue, to worry about the threats already on our doorstep.

ChatGPT is not telling the truth

"We have not yet thought at all, for example, about solutions to compensate for all the job losses that the use of AI will generate," said Grigorios Tsoumakas, an expert in artificial intelligence at the Aristotle University of Thessaloniki. More than 300 million employees worldwide could lose their jobs because of the automation of tasks, said the bank Goldman Sachs in a new study published Monday, March 27.

These AIs, handed today to millions of Internet users like so many digital toys, are not the safest, either. "We have seen how easily experts have been able to circumvent the few security measures put in place on these systems. What would happen if terrorist organizations managed to hijack them to create viruses, for example?" asks Grigorios Tsoumakas.

Cybersecurity risks are symptomatic of a broader problem with these systems, according to Sifakis. "We cannot make tools available to the public so casually when we do not really know how these AIs will react," he said.

Not to mention that ordinary users need to be educated. Indeed, "there may be a tendency to believe that the answers these systems give are true, when in reality the machines are simply trained to compute the most likely continuation of a sentence so that it sounds as human as possible. It has nothing to do with true or false," Sierra said.
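To make Sierra's point concrete (an illustration added here, not drawn from the article): a language model does nothing more than score candidate next words and pick a likely one. The sketch below uses a tiny, made-up probability table, so every word and number in it is hypothetical; real models score tens of thousands of tokens with a neural network, but the principle is the same.

```python
# Minimal sketch of next-word prediction with hypothetical probabilities.
# Nothing here checks facts: the model only ranks continuations.

toy_model = {
    # context (tuple of words) -> estimated probability of each next word
    ("the", "capital", "of", "france", "is"): {
        "paris": 0.92,   # highest-scoring continuation
        "lyon": 0.05,
        "london": 0.03,
    },
}

def next_word(context):
    """Return the most probable next word; there is no notion of truth."""
    probs = toy_model[tuple(context)]
    return max(probs, key=probs.get)

print(next_word(["the", "capital", "of", "france", "is"]))  # -> "paris"
```

Here "paris" wins simply because the model scored it highest, not because the program verified anything, which is exactly why a fluent-sounding answer can still be wrong.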

That is enough to make these tools formidable weapons of mass disinformation. And not only that: "It is urgent to ask what will happen when people start making decisions that affect their lives based on the responses of these AIs," adds Joseph Sifakis. What happens, for example, if a judge asks GPT-4 for the best sentence to hand down in a case?

"Irresponsible" Big Tech

So many reasons, then, to take a break. But a break to do what? First, "we must realize that there is no recognized test to assess whether a system is dangerous or safe," says David Krueger.

In the meantime, we must rely on the assurances of Meta (Facebook), Google or Microsoft (which has invested billions in OpenAI, the creator of ChatGPT). According to them, everything works perfectly. "But they have entered a race in which the security of their systems will probably not take priority over economic objectives," said Daniel Leufer of the NGO Access Now.

"Big Tech currently has a reckless and irresponsible attitude in deploying this technology. That's why we need to give the scientific community and governments time to find the right brakes to put on this AI race," says David Kruger.

The open letter calls for a six-month moratorium on the development of new versions of these AIs. The timeframe may seem puzzling, "but it is only a starting point for trying to get a better assessment of what already exists. Once the deadline has passed, there will always be time to extend it," says Vincent Corruble.

The tech giants swear they are capable of self-regulation. Yet at the same time, "the teams responsible for assessing whether the AI research conducted within Meta, Google or Microsoft is ethically justified were among the first to be dissolved when these companies sought to cut their workforces," says Grigorios Tsoumakas.

Moreover, these large companies may not grasp the long-term consequences of their innovations. "If Facebook had thought to consult specialists in adolescent psychology when it introduced the 'Like' button, we might have been able to explain to them the devastating effect it would have on young people's mental health, given the pressure of social networks," Sierra said. In other words, prevention is better than cure, especially since, with a technology that can change everything, the cure may prove very difficult.
