Posting a message of support for the perpetrator of a terrorist act on the Internet is severely punished by law.


Jenny Kane / AP / SIPA

  • Since the death of Samuel Paty in a terrorist attack in Conflans-Sainte-Honorine, pressure has intensified on the major platforms, accused of letting hate speech and violent content spread online.

  • 20 Minutes interviewed lecturer and researcher Romain Badouard, who has just published Les nouvelles lois du Web. Modération et censure (The New Laws of the Web: Moderation and Censorship) at Éditions du Seuil.

  • "It is urgent to invent a democratic regulation of content on the Internet, so that it remains for everyone a space for debate, commitment and freedom", explains the researcher.

Singled out after the assassination of Samuel Paty in a terrorist attack in Conflans-Sainte-Honorine, social networks are now in the hot seat.

In recent days, the issue of moderation of content by major platforms has once again been at the center of political debates in France.

And pressure is mounting on the web giants, accused of letting hate speech and violent content spread online.

How can we prevent such content from circulating on social networks and going viral?

"Faced with the rise of disinformation and hate speech, a new regulation must now be put in place", estimates Romain Badouard, lecturer in information and communication sciences at the University of Paris- II (Panthéon-Assas), and researcher at the Carism laboratory of the French Press Institute.

“It is urgent to invent a democratic regulation of content on the Internet, so that it remains for everyone a space for debate, engagement and freedom,” he adds.

20 Minutes interviewed the researcher, who has just released The New Laws of the Web: Moderation and Censorship at Éditions du Seuil*.

The question of moderation on social networks has been at the center of the news since the death of Samuel Paty.

Do the platforms bear a responsibility in the teacher's assassination?

Social networks bear a form of responsibility, both direct and indirect, in the chain of events that led to this assassination.

Several questions arise, in particular concerning the terrorist's Twitter account, which had been reported several times by users, and the video posted by a student's parent, which was reported to the Pharos platform.

What happened after these reports?

Has this content been reviewed by moderators?

What were their decisions?

For now, all this is still rather vague.

In any case, what happened in this affair clearly shows that it is urgent to act to strengthen the moderation of the major platforms.

What should be done to fight online hate?

Implement new measures, strengthen the Pharos platform…

After Samuel Paty's assassination, the government was quick to announce new measures.

But laws already exist, particularly on cyberbullying.

For me, the urgent question today is rather what resources are devoted to enforcing what already exists.

The Pharos platform has only 25 to 30 officials to handle the flow of reports it receives. That is simply insufficient.

We have also been talking for years about establishing a dedicated prosecution service in France on the issue [the government has confirmed the upcoming creation of a specialized unit within the Paris public prosecutor's office, a surviving provision of the Avia law].

And then there is a question that is just as fundamental: how to influence the attention economy [the promotion of radical, violent, outrage-inducing content…], on which the economic model of these platforms is based?

To fight against hateful content and “cyber Islamism”, Marlène Schiappa wants to set up “a republican cyber-brigade”.

Is it a good idea to interfere in the online discussions of Internet users?

It is also a way of regulating social networks.

This is an interesting proposal, which shows that Internet users must also take part in the moderation effort.

This logic of counter-discourse is not new.

Associations and groups of Internet users are already organizing to push back against hateful or violent speech on the Internet.

Their goal is above all to attract the attention of what we call the “silent majority”, that is to say all those people who watch conversations unfold without taking part in them.

It is necessary work when it comes from Internet users.

But if it is some form of state-organized response, I am not sure it would have the same success.

Some will see it as an attempt by the government to intrude into online debates.

You explain in your book that it is urgent to invent “democratic regulation” of content on social networks.

But how?

Social networks have today become one of the main arenas of public debate.

The question is not whether to regulate them, but how and with whom we do it.

When we talk about regulation, we tend to approach it from the angle of our relationship with the platforms.

But this is a much broader topic, which also encompasses how the online advertising market works, and the role advertisers can play in combating hate speech and false information.

It is also a subject which concerns Internet users themselves, and civil society associations.

And so the challenge today is to create forms of governance that can take into account these different actors.

But also to find the right balance between the fight against online hatred and freedom of expression.

The problem of online hate, and therefore moderation, knows no borders, and affects all countries.

Should legislation be passed at the national, European or even global level?

States must participate in this regulation, that is a certainty.

For many years it has been said that governments are powerless in the face of platforms, which is not true.

The new laws which are being put in place, in France but also in Europe, prove it today.

Admittedly, the Avia law was struck down over one of its main provisions, deemed a threat to civil liberties, but other regulatory laws have been passed, such as the one on "false information".

And then when we look at the European level, several countries have already passed legislation which imposes new standards on platforms.

The role of the state is therefore to set new rules with which social networks must comply.

And this is also the role of the European Commission with the Digital Services Act [a European text aimed at regulating platforms and their content, to be presented on December 2], which aims to set common operating rules for social networks.

The “opacity” of social networks' algorithms and moderation systems is a real concern today.

How can platforms be forced to be more transparent?

There is still some way to go in terms of transparency.

However, the platforms have made progress on the subject over the past three years, some regularly publishing transparency reports and further detailing their moderation policy in their terms of use.

But all this remains very opaque, and poses a problem.

The public authorities should have the power to audit what the platforms do: how the moderators operate, how the algorithms prioritize content… It is now essential to understand how platform moderation policies work in order to act effectively on "problematic" content.

In fact, excessive moderation can lead to a form of censorship.

Some say that the web giants could become enemies of free speech…

If the platforms do not do enough, they are accused of laxity, and of letting violence and hatred spread online.

And if they act too much, they are accused of censorship.

It's a difficult balance to find which, in my opinion, will come from the ability to involve citizens in the governance of the platforms.

It may seem utopian, but users should be given the right to be heard in decisions concerning the publication policies of social networks, and the right to appeal the decisions that are taken.

A platform like Facebook is already moving in this direction, in particular with the establishment of its "supreme court", a college of independent figures with binding authority over Facebook's publication policies.

There is a lot of talk today about moderation.

But isn't the main concern rather the virality of content?

Moderation isn't just about removing content or leaving it online.

It is also about limiting its distribution.

Many platforms, such as YouTube and Facebook, act precisely on the visibility of content deemed "nauseating" by limiting its virality, that is to say by keeping it out of news feeds and restricting its recommendation by algorithms.

This is a very effective technique for limiting the spread of problematic content.

But, as we said earlier, there should now be transparency about the automated management of this moderation.

When we see that the American election will largely play out on social networks, isn't all this reflection on platform regulation coming a little late?

If this subject is coming up in the debate today, it is because public opinion has taken up this question.

Today, citizens feel concerned, journalists talk about it, and states too are trying to make themselves heard.

All this creates a climate conducive to regulation, which was not necessarily the case before the 2016 American presidential election, when we were not yet asking ourselves all these regulatory questions.

We can see how dangerous it is today to do nothing.

Even the platforms, previously reluctant, have understood that they have to get around the table, especially because of new pressure from advertisers.


* The New Laws of the Web: Moderation and Censorship (Les nouvelles lois du Web. Modération et censure), by Romain Badouard, Éditions du Seuil, October 29, 2020, 128 pages, 11.80 euros.
