• On May 15, 2019, Emmanuel Macron and New Zealand Prime Minister Jacinda Ardern brought together 9 heads of state, the European Commission and the leaders of 7 digital companies to launch the “Christchurch call”.

  • Conceived after the attack on two mosques in New Zealand, which the perpetrator broadcast live on social networks, this initiative aims to combat the spread of terrorist and violent content online.

  • Despite significant progress in this area, technological, economic and political difficulties remain.

On March 15, 2019, horror spread across the screens of Internet users around the world. Broadcast live on Facebook by its perpetrator for 17 minutes, the deadly attack on two mosques in Christchurch, New Zealand, caused widespread shock. Two months later, in May 2019, Emmanuel Macron and New Zealand Prime Minister Jacinda Ardern sought to mount a political response commensurate with the emotion stirred by the event. Gathered in Paris, they issued a solemn appeal to the international community, platforms and civil society.

At the time, the stated ambition was sweeping: terrorist and violent content must disappear from social networks.

Two years after this “Christchurch Call”, its members are meeting this Friday for a videoconference summit.

Polled beforehand to measure progress on the moderation commitments made in 2019, the GAFAM* and the states will take stock of the actions carried out over the past two years and set new “priority” areas.

More players and more transparent governance

The first notable development since 2019: the number of players engaged in the fight against violent and terrorist content online has grown considerably. About fifty states have joined the call (compared with nine originally), along with the European Commission, Unesco, the Council of Europe, 10 digital companies and 47 representatives of civil society. The United States' accession to the Christchurch Call on May 7 was particularly well received. “It's symbolically strong. We are counting on this membership to usefully add to the pressure put on the platforms,” says a source at the Élysée.

Two major countries in the digital economy, Russia and China, home to popular platforms such as Telegram, TikTok, WeChat and VKontakte, are nonetheless missing. “The states adhering to this call are in effect committing to regulate Western platforms. In Russia and China, however, these platforms have little presence and market share, so the approach holds little interest for these two countries,” explains Olivier Ertzscheid, a researcher in information and communication sciences at the University of Nantes.

Despite these absences, the member states of the appeal have initiated various reforms.

The governance of the Global Internet Forum to Counter Terrorism (GIFCT) has, for example, been improved.

Created in 2017 by Facebook, Microsoft, Twitter and YouTube to pool their efforts against the spread of extremist content, the GIFCT was long considered opaque and too closely tied to these Silicon Valley companies.

A central player in cooperation between Internet companies and states, the forum now operates as an independent NGO, and an independent advisory committee has been set up.

Stricter crisis protocols and policies

To deal with the online broadcast of attacks or other violent acts, the forum has defined “crisis protocols”, activated by platforms when a live video spreads massively or in the hours following an attack. The protocol has been used twice since Christchurch: first in October 2019, during the broadcast on Twitch (owned by Amazon) of the attack on a synagogue in Halle, Germany, then again in May 2020 during the Glendale shooting in Arizona.

In the progress report on the Christchurch Call, consulted by 20 Minutes, the platforms also claim to have changed their usage policies to restrict the distribution of violent content.

Facebook, for example, has restricted use of its “Live” tool for people who have previously broken certain platform rules.

YouTube has put similar restrictions in place, and Twitch recently banned content that “depicts, praises, encourages or supports terrorism or violent extremist actors or acts”.

The limits of artificial intelligence

At the same time, social networks are communicating more regularly about removals of such content. Since Christchurch, Facebook says it has “banned” more than 250 white supremacist organizations from its platform. Between October and December 2020, Mark Zuckerberg's network also “detected” 16 million pieces of violent content and 8.6 million pieces of terrorist content. YouTube, for its part, claims progress in reducing Internet users' exposure to violence and terrorism. According to the firm, in early 2017 only 8% of videos removed for “violent extremism” had been viewed fewer than 10 times at the time of their removal. By the fourth quarter of 2019, that proportion had reached 90%.

But detecting this content remains the central question. During recent attacks, such as the one in Halle, Germany, the alert came not from the platforms but from Internet users. To improve its detection technology, Facebook has invested 7.5 million dollars in a university partnership. The company also has a cooperation agreement with US and UK governments and police forces to obtain videos shot during firearms training sessions. The objective: to improve its ability to detect videos filmed from a shooter's point of view, and to avoid moderation errors. “But the technical and human difficulties are the same as two years ago, and even ten years ago,” explains Olivier Ertzscheid. “The fundamental problem is: can we moderate the flow of information in real time? The answer is no, or only with great difficulty.”

“Dark social” and algorithms

While the Christchurch Call has spurred platforms into action against online extremism, the obstacles remain numerous.

“The call is hollow, generic and full of good intentions. It has no binding dimension for the parties concerned,” the Nantes researcher charges.

In addition, one of the projects deemed a priority back in 2019, examining the role platform algorithms play in spreading this violent content, has made little progress.

For a simple reason, argues Olivier Ertzscheid: “The primary economic driver of these companies is virality. As long as that logic remains, divisive, violent and radical content will not disappear. Hiring thousands of moderators and improving algorithmic filters doesn't solve the problem; it merely helps contain the fire.”


Another difficulty: cooperation with “dark social” platforms, such as encrypted messaging apps (Signal, Telegram) or confidential social networks, is proving hard to establish. Harder to spot, violent or terrorist content there is also harder to regulate. “One of the areas of work is to increase the number of participants in the call and manage to get closer to these small and medium-sized platforms so that they too can prevent the posting and distribution of this content,” says a source at the Élysée.

In just a few years, these platforms have become favored sharing spaces for Internet users, but some messaging services have nevertheless taken steps to slow virality.

On WhatsApp, for example, since January 2019 a piece of content can be forwarded to no more than five conversations at a time.

According to Facebook, this measure has led to a 25% drop in the number of message forwards.

Necessary as it is, cooperation between platforms and states is no longer sufficient, Olivier Ertzscheid concludes: “Regulation evolves very slowly, except, alas, when shock grips public opinion, as was the case after Christchurch. In recent years, platforms have felt compelled to accelerate not under the effect of political injunctions, but only when the general public and Internet users have urged them to do so.”

* Google, Amazon, Facebook, Apple, Microsoft
