The nightmare, inspired by countless science fiction movies, begins when machines' abilities surpass those of humans and they spiral out of control.
"As soon as we have machines that are trying to survive, we will have problems," Canadian researcher Yoshua Bengio, one of the fathers of machine learning, recently said.
According to a variant imagined by the Swedish philosopher Nick Bostrom, the decisive moment will come when machines can build other machines themselves, triggering an "intelligence explosion".
According to his "paperclip maximizer" thought experiment, if an AI were given the ultimate goal of maximizing the production of this humble office supply, it would end up covering "first the Earth and then increasingly large portions of the Universe with paper clips," he illustrates.
Nick Bostrom is a controversial figure, having claimed that humanity could be living in a computer simulation, and for supporting theories close to eugenics. He also recently had to apologize for a racist message he sent in the 1990s, which had resurfaced.
Yet his ideas about the dangers of AI remain highly influential, and inspired both billionaire Elon Musk, boss of Tesla and SpaceX, and physicist Stephen Hawking, who died in 2018.
The image of the red-eyed cyborg from "Terminator", sent from the future by an AI to put an end to all human resistance, has particularly marked the collective unconscious.
The T-800 terminator robot from the movie "Terminator 2", March 18, 2009, in Tokyo, Japan © / AFP/Archives
But according to experts in the "Stop Killer Robots" campaign, writing in a 2021 report, this is not the form autonomous weapons will take in the coming years.
"Artificial intelligence will not give machines the desire to kill humans," reassures robotics specialist Kerstin Dautenhahn, of the University of Waterloo in Canada, interviewed by AFP.
"Robots are not evil," she says, while conceding that their developers could program them to do harm.
A less obvious scenario is that artificial intelligence could be used to create toxins or new viruses, with the aim of spreading them around the world.
A group of scientists who had been using AI to discover new drugs ran an experiment in which they modified it to search for harmful molecules instead.
In less than six hours, they managed to generate 40,000 potentially toxic agents, according to an article in the journal Nature Machine Intelligence.
With these technologies, someone could eventually find a way to spread a poison such as anthrax more quickly, said Joanna Bryson, an AI expert at Berlin's Hertie School.
"But it's not an existential threat, just a terrible weapon," she told AFP.
An outdated species
In apocalypse movies, disaster happens suddenly and everywhere at once. But what if humanity gradually disappeared, replaced by machines?
"In the worst case, our species could become extinct without a successor," philosopher Huw Price said in a promotional video for the Cambridge University Centre for the Study of Existential Risk.
A humanoid robot presented in Hong Kong, China, May 10, 2023 © Peter PARKS / AFP
There are, however, "less bleak possibilities," where humans augmented by advanced technology could survive. "The purely biological species then ends up going extinct," he continues.
In 2014, Stephen Hawking argued that our species would eventually be unable to compete with machines, telling the BBC that this could "spell the end of the human race".
Geoffrey Hinton, a researcher who spent years trying to create machines resembling the human brain, most recently at Google, has spoken in similar terms of "superintelligences" superior to humans.
On PBS, he recently said that it was possible that "humanity is only a passing phase in the evolution of intelligence".
© 2023 AFP