The Dutch NGO PAX has identified the main players in this highly strategic sector and sorted them into categories. The "high concern" category lists companies, such as Amazon and Microsoft, that are seeking contracts with the Pentagon.

The American giants Amazon, Microsoft and Intel are among the technology companies that could lead a huge arms race in the field of artificial intelligence, according to an NGO report on lethal autonomous weapons.

"Highly controversial" weapons

The Dutch organization PAX surveyed the main players in this highly strategic sector and ranked 50 companies according to three criteria: Do they develop technologies that could be used to create "killer robots"? Do they work on military projects related to these technologies? Have they committed to abstaining from contributing to them in the future?

The use of artificial intelligence to allow weapon systems to select and attack targets autonomously has sparked intense ethical debate in recent years. Some critics argue it could amount to a third revolution in warfare, after the inventions of gunpowder and the nuclear bomb. "Why are companies like Microsoft and Amazon not denying that they are currently developing these highly controversial weapons, which could decide to kill people on their own, without any human involvement?" asks Frank Slijper, lead author of the report published Monday.

Google refuses to get involved

Twenty-two companies are classified as a "medium concern" by the report's authors, whose analysis covers twelve countries around the world. Among them is Japan's SoftBank, best known for its humanoid robot Pepper.

The "high concern" category includes 21 companies, including Amazon and Microsoft, both of which are trying to sign a contract with the Pentagon to provide the US military with the infrastructure of its "cloud" data storage service. line.

"Autonomous weapons will inevitably become weapons of mass destruction," Stuart Russell, professor of computer science at the University of California at Berkeley, told AFP. "Work is currently underway to ensure that everything that currently constitutes a weapon - tanks, fighter planes, submarines - has its standalone version," he adds.

Google and six other companies fall into the "good practice" category. Last year, Google withdrew from the bidding for the Pentagon's cloud contract, saying it could conflict with its artificial intelligence "principles". The California giant explained that it did not want to be involved in "technologies that are or could be harmful" or in "weapons or other technologies whose main purpose or implementation is to cause or facilitate physical injury to people".

Towards an international ban?

On Tuesday at the United Nations in Geneva, a panel of government experts debated policy options for regulating autonomous weapons, though reaching consensus on the issue has so far proved very difficult.

According to the NGO PAX report, Microsoft employees have also voiced their opposition to a US military contract for HoloLens augmented reality headsets intended for training and combat.

Many concerns also surround a future generation of autonomous weapons that have not yet been invented but are depicted in some science fiction films, such as mini-drones. "With this type of weapon, you could send a million of them in a container or a cargo plane, and they would have the destructive capacity of a nuclear bomb but would leave all the buildings intact," says Stuart Russell, referring to weapons that could use facial recognition to identify their targets.

In April, the European Commission proposed a set of ethical rules for the artificial intelligence sector. The proposals stressed the need to place "the human" at the heart of artificial intelligence (AI) technologies and to promote "non-discrimination" as well as the "well-being of society and the environment". For Stuart Russell, the next step is an international ban on killer AI: "Machines that can decide to kill humans should not be developed, deployed or used," he says.