• Six associations sued the social network before the Paris court in May 2020, arguing that the company's failure to meet its moderation obligations was "old and persistent."

  • In a decision rendered in early July, the summary judge ordered Twitter to hand over, within two months, documents detailing its means of fighting online hatred.

  • During the appeal hearing held this Thursday before the Paris Court of Appeal, the social network indicated for the first time that it employs "1,867 moderators" worldwide, but refused to detail its moderation resources in France.

It is a figure the social network had refused to reveal until now: Twitter has been forced by the French courts to disclose the number of people assigned to moderating its content.

Summoned in summary proceedings by several anti-racist associations over breaches of its moderation obligations regarding hate messages, the platform officially stated on Thursday that it employs "1,867 moderators worldwide" for approximately 400 million monthly users, that is, roughly one moderator for every 200,000 users.

"Despite a decision of the Paris court ordering it to disclose all documents detailing its means of fighting online hatred in France, Twitter has once again dodged the issue. The social network refused to say how many moderators it employs in France, what training they receive, or how its algorithms work, contenting itself with saying that it uses fewer than 2,000 moderators worldwide," Samuel Lejoyeux, president of the UEJF (Union of Jewish Students of France), told 20 Minutes. Along with SOS Racisme, Licra, SOS Homophobie, J'accuse and Mrap, the UEJF brought the case against the platform in May 2020.

Twitter favors moderation "by algorithms"

The social network defended its hate-content moderation system at Thursday's hearing before the Paris Court of Appeal. "We recently doubled the number of people responsible for enforcing our rules. At Twitter, 1,867 people are dedicated exclusively to enforcing our policies and moderating content. This figure represents more than a third of our entire global workforce," reads the written submissions of Twitter International Company. These figures also appeared in a report sent to the CSA (Conseil supérieur de l'audiovisuel, France's audiovisual regulator) in September 2021, but had never been publicly disclosed until today.

Twitter justifies its low number of human moderators by its use of artificial intelligence, which it considers more effective.

"We will not solve the challenge of large-scale moderation with more human resources alone. We have found that we are much more effective in fighting harmful content by using more technology and increasing our teams proportionately," Twitter explained in this report on the fight against disinformation published by the CSA.

Only 12% of hate content removed during the first lockdown

The UEJF, SOS Racisme and SOS Homophobie decided to take legal action after observing a 43% increase in hate content on Twitter during the first lockdown in 2020. According to a study conducted from March 17 to May 5, 2020 by these associations, "the number of racist posts increased by 40.5% (over the period), anti-Semitic posts by 20% and LGBTphobic posts by 48%." The associations had also reported 1,110 hateful tweets to the social network, mainly unambiguously homophobic, racist or anti-Semitic insults, and found that only 12% of them had been deleted within "a reasonable period of 3 to 5 days."

"Twitter shows no real desire to fight hate on its platform (racism, anti-Semitism, homophobia). Everyone can see it, every day, just by going on the platform. We want the social network to comply with French law. We demand transparency and precise information on its day-to-day moderation," says the president of the UEJF, who is very confident about the ruling due on January 20.

Regularly accused of hosting or contributing to the dissemination of hateful or violent content, the major content platforms have been encouraged to set up filtering algorithms, reporting procedures and teams of moderators.

But Twitter has for years refused to disclose the resources it devotes to moderation.

The social network maintains, however, that it invests in moderation technologies "to reduce the burden on users of having to make a report," specifying that "more than one in two tweets we take action on for abuse" is now flagged by automatic detection rather than by user reports.
