The Paris Court of Appeal has confirmed an interim court order issued in July requiring the social network to detail its mechanisms for moderating and combating hateful and discriminatory comments.

Twitter had been summoned before the Paris court in May 2020 by SOS Racisme, the International League against Racism and Anti-Semitism (Licra) and SOS Homophobia, among others, which argued that the company's failure to meet its moderation obligations was "longstanding and persistent".

"By its decision, French justice demonstrates that the GAFA cannot impose their own law", welcomed the associations in a joint press release.

"Twitter will finally have to take responsibility, stop tackling and think ethics rather than profit and international expansion," they add.

"Twitter is studying the decision rendered by the Paris Court of Appeal," the social network said for its part. "Our top priority is to ensure the safety of the people using our platform. We are committed to building a safer Internet, combating online hate and improving the health of the public conversation."

"Reports"

By upholding the Paris court's decision of last July, the French courts therefore order Twitter International to hand over "any administrative, contractual, technical or commercial document relating to the material and human resources implemented" to "combat the dissemination of offenses of advocating crimes against humanity, incitement to racial hatred, and hatred against persons on the basis of their sex".

Specifically, the company, which is incorporated under Irish law, must also detail "the number, location, nationality and language of the people assigned to processing reports from users of the French platform", "the number of reports", "the criteria and number of subsequent removals" as well as "the number of reports transmitted to the competent public authorities, in particular to the public prosecutor's office".

In its judgment, the Court of Appeal also orders the social network to pay 1,500 euros in damages to several of the associations involved in the proceedings.

The associations based their request on the 2004 law on confidence in the digital economy (LCEN), which requires platforms to "contribute to the fight" against online hate and in particular to "make public the means they devote to the fight against these illicit activities".

"Interference"

In particular, they had produced several bailiff's reports dating from 2020 and 2021. In the most recent, covering May 20 to 23, 2021, they noted that "only 28 of the 70 hateful tweets reported were removed by Twitter after forty-eight hours".

Twitter, which contested this "testing" method, argued that the associations' approach amounted to "interference in the management of a company" and ran counter to the "freedom to conduct a business", its lawyer, Me Karim Beylouni, had maintained since the proceedings at first instance.

In early September, the social network, which has around 12.8 million monthly active users in France, launched a "safety mode".

This feature blocks, for seven days, accounts that use "potentially harmful language" such as insults or hateful remarks, or that send "repetitive and unsolicited mentions".

© 2022 AFP