San Francisco (AFP)

Facebook has enlisted police forces on both sides of the Atlantic to help train its artificial intelligence tools to stop live video broadcasts of extremist attacks on its platform, like the Christchurch killings, which streamed for long minutes on the social network.

The initiative, announced on Tuesday by the world's leading social network, is part of the company's broader effort to clean up "hateful and extremist" content, and in particular to add movements and individuals preaching white supremacy to its list of "terrorist organizations".

The network founded by Mark Zuckerberg was heavily criticized for taking 17 minutes to stop the live video of a white supremacist who filmed himself attacking a mosque on March 15 in Christchurch, New Zealand. He killed 51 Muslim worshippers.

Since then, the company has stepped up its initiatives: restricting access to Facebook Live, meeting with policymakers, and forming an alliance with other networks to curb the "diversion of technologies to broadcast terrorist content."

Starting in October, London police will help Facebook better train its artificial intelligence tools to detect such content quickly and delete it.

The difficulty is that the "machine" must be able to tell the difference between a real-life attack and a scene from a movie or video game.

Footage from cameras worn by London police units during firearms training will feed and enrich the image bank that Facebook has already built with US law enforcement.

Artificial intelligence tools need huge amounts of data - in this case, footage of shootings - to learn how to correctly identify such content, sort it and ultimately delete it.

- "Terrorism" in the broader sense -

After putting a lot of resources into fighting the use of its network by organizations like the Islamic State group, Facebook has recently focused on white supremacism, whose followers have been responsible for many killings in recent years, especially in the United States.

The network noted that it has banned 200 white supremacist organizations and, with input from outside experts, expanded its definition of what constitutes a "terrorist" organization. "The new definition remains focused on the behavior, not the ideology, of these groups," but it now extends to "acts of violence, particularly directed against civilians, with the intent to coerce and intimidate them".

Facebook has also expanded the mission of a 350-member team of experts in law enforcement, national security and counterterrorism, along with academics specializing in radicalization, to curb the efforts of people and organizations that call for violence or commit violent acts that have an impact in the real world "and not just online."

In the same vein, the company has extended to Australia and Indonesia an initiative launched in March in the United States that redirects users whose searches contain keywords associated with white supremacy to a "de-radicalization" site. To measure the effectiveness of these initiatives, Facebook has partnered with Moonshot CVE (Countering Violent Extremism), a young British company that uses data mining to target users with extremist views.

Moonshot CVE has developed a method of redirecting these extremists to sites offering neutral information, or enlisting well-known figures and former members of radicalized groups to try to change their minds.

© 2019 AFP