San Francisco (AFP)

Facebook is calling on law enforcement in the United States and the United Kingdom to help train its artificial intelligence tools to stop live broadcasts of extremist attacks, such as the Christchurch killings, which streamed for long minutes on the social network.

The initiative, announced on Tuesday by Facebook, is part of a broader set of measures the social network is taking to clean up hateful and extremist content, in particular its efforts to add movements and individuals preaching white racial superiority to its list of "terrorist organizations".

Facebook was heavily criticized for taking 17 minutes to stop the live broadcast by a white supremacist who killed 51 Muslim worshipers on March 15 in Christchurch, New Zealand.

Since then, the company has stepped up its initiatives: restricting access to Facebook Live, meeting with politicians, and forming an alliance with other networks to curb the "misuse of technology to broadcast terrorist content."

Starting in October, London's Metropolitan Police will help Facebook better train its artificial intelligence tools to quickly detect and delete such content.

The difficulty is that the "machine" must be able to tell the difference between a real-life attack and a scene from a movie or video game.

Footage from body cameras worn by the Met's firearms units during shooting training will feed and enrich the bank of images that Facebook has already built with law enforcement agencies in the United States.

Artificial intelligence tools need huge amounts of data - in this case, images of shootings - to learn to recognize, sort, and ultimately delete such content.

After devoting significant resources to fighting the use of its network by organizations such as al-Qaeda or the Islamic State group, Facebook has recently focused on white supremacism, whose supporters have been behind numerous killings perpetrated in the United States in recent years.

The network notes that it has already banned 200 white supremacist organizations.

© 2019 AFP