The National Assembly is set to adopt a controversial online anti-hate law this Wednesday, which would require platforms to remove "manifestly" illegal content within 24 hours. - Clément Follain / 20 Minutes

Four anti-discrimination associations brought Twitter before the Paris court on Monday, arguing that the social network has failed in a "long-standing and persistent" way to meet its content-moderation obligations, according to a document sent to AFP on Tuesday.

"Faced with a 43% increase in hateful content on Twitter during the lockdown period, the UEJF, SOS Racisme and SOS Homophobie are bringing summary proceedings against Twitter for failing to comply with its legal moderation obligations," they explained in a statement.

According to a study the associations conducted from March 17 to May 5, "racist content increased by 40.5% (over the period), anti-Semitic content by 20% and LGBTphobic content by 48%". The associations also say they reported 1,110 unambiguously hateful tweets to the social network, mainly homophobic, racist or anti-Semitic insults, and found that only 12% of them had been deleted within "a reasonable period of 3 to 5 days".

"The thick mystery surrounding Twitter's moderation services"

"These results are intolerable. (…) What this 'testing' shows is the massive inaction of a platform that clearly refuses to commit the human resources necessary to moderate the content its activity generates," said SOS Racisme president Dominique Sopo, quoted in the press release.

The associations are asking the court to appoint an expert tasked with ascertaining "the material and human resources implemented" by Twitter "to fight against the dissemination of apology for crimes against humanity, incitement to racial hatred, hatred of people on the grounds of their sex, sexual orientation or identity, incitement to violence, in particular sexual and gender-based violence, and offenses against human dignity".

They thus hope to "dispel the thick mystery surrounding the composition and management of Twitter's moderation services" and to measure "the extent of the long-standing and persistent casualness" with which it moderates content.

More automatic detection than user reports

Contacted by AFP, Twitter said it is investing in moderation technologies "to reduce the burden on users of having to report". "More than one in two tweets on which we take action for abuse" now comes from automatic detection rather than a user report, said Audrey Herblin-Stoop, Twitter France's director of public affairs, in a written statement. "For comparison, this ratio was 1 in 5 in 2018," she added.

Regularly accused of hosting or contributing to the spread of hateful or violent content, the major content platforms have been pressed to set up filtering algorithms, reporting procedures and teams of moderators.

In France, the National Assembly is set to definitively adopt a controversial bill against online hate this Wednesday, which would require platforms and search engines to remove "manifestly" illegal content within 24 hours, on pain of fines of up to 1.25 million euros. The deadline is cut to one hour for terrorist or child pornography content.
