"Twitter will no longer be able to allow hate to flow onto its platform with impunity."

In a press release published Thursday, January 20, the French associations behind the legal proceedings against the American social network welcomed the decision handed down by the Paris Court of Appeal.

After a first ruling against it last summer, Twitter International has again been ordered to detail its measures for combating online hate and discriminatory content.

The company now has two months to comply with the injunction issued by the French courts.

But what is the real significance of this decision?

For Asma Mhalla, a specialist in the digital economy and lecturer at Sciences Po, scrutiny of the moderation carried out by social networks, and of their responsibility in the spread of hate speech, must go hand in hand with collective reflection on our digital habits.

How do you analyze the decision rendered this Thursday by the Paris Court of Appeal against Twitter International?

In my opinion, it's a good thing. Twitter, like the vast majority of social networks, is a private company under American law that disseminates and produces information. But this information is not always neutral: it can be manipulated, and the content published on the platform can be hateful. It therefore seems important and necessary to keep an eye on the measures implemented to moderate and remove such hateful content.

Only on that condition can we then judge whether these ways of organizing public expression are sufficient or whether, on the contrary, the platforms go too far in their moderation.

That said, moderation should not be treated as the alpha and omega of the fight against online hatred.

While the virality mechanisms and hate-amplifying algorithms used by social networks are real, cause and symptom should not be confused.

Technological – or legal – solutionism should not prevent us from questioning the state of our public debate.

Why is hate speech expressed in the first place?

We must collectively ask ourselves this question.

Is this decision really binding on Twitter?

Twitter International has two months to report to the French courts on the moderation measures it has deployed.

But in the past, we have already seen the company refuse to comply with court orders.

The question of the balance of power seems important to me.

It is not the same thing to have a judge, isolated somewhere in France, facing an international company, as it is to have a Europe-wide mechanism like the one provided for by the DSA, the Digital Services Act.

Having a designated authority in each European state, with the power to fine and oversee platforms, could change this balance of power.

How is this debate expressed elsewhere in the world?

It took place across the Atlantic at the time of the “Facebook Files” and during the hearings of whistleblower Frances Haugen.

If we broaden the debate to Facebook, we saw that the network applied moderation that differed from one country to another.

Content published in the United States and Israel, for example, was heavily moderated, while content elsewhere, particularly in Arab or Asian countries, was much less so.

Moreover, the difficulty for judicial systems around the world is the sheer scale of these platforms.

We cannot put a moderator behind every user, and moderation algorithms have already shown themselves to be fallible.

The challenge for us, as users, is also to question how we use social networks.
