Paris (AFP)

Four anti-discrimination associations took Twitter before the Paris court on Monday, arguing that the social network has failed in a "long-standing and persistent" manner to meet its content moderation obligations, according to a document sent to AFP on Tuesday.

"Faced with a 43% increase in hate content on Twitter during the period of confinement, the UEJF, SOS racism and SOS homophobia are acting in summary proceedings against Twitter for non-compliance with its legal obligation of moderation", they explained in a statement.

According to a study they conducted from March 17 to May 5, "racist content increased by 40.5% (over the period), anti-Semitic content by 20% and LGBTphobic content by 48%".

The associations also say they reported 1,110 hateful tweets to the social network, mainly unambiguous homophobic, racist or anti-Semitic insults, and found that only 12% of them had been deleted within "a reasonable period of 3 to 5 days".

"These results are intolerable. (...) What this + testing + shows is a massive inaction on the part of a platform which manifestly refuses to put the human resources necessary for the moderation of the content that its activity generates" , said SOS Racism president Dominique Sopo, quoted in the press release.

The associations are asking the court to order the appointment of an expert tasked with ascertaining "the material and human resources deployed" by Twitter "to combat the spread of the offences of condoning crimes against humanity, incitement to racial hatred, hatred of people on the grounds of their sex, sexual orientation or identity, incitement to violence, in particular sexual and gender-based violence, and attacks on human dignity".

They thus wish "to dissipate the thick mystery surrounding the composition and the management of the services of regulation of Twitter" and to measure "the extent of the old and persistent casually" on the moderation of the contents.

Contacted by AFP, Twitter said it was investing in moderation technology "to reduce the burden on users of having to report" content.

"More than one tweet out of two on which we act for abuse" now comes from automatic detection rather than from a report, said Twitter France public affairs director Audrey Herblin-Stoop in a written statement. "For comparison, this ratio was 1 in 5 in 2018," she added.

Regularly accused of hosting or helping to spread hateful or violent content, major online platforms have been urged to put in place filtering algorithms, reporting procedures and teams of moderators.

In France, the National Assembly is due to give final approval on Wednesday to a controversial bill against online hate, which would require platforms and search engines to remove "manifestly" illegal content within 24 hours or face fines of up to 1.25 million euros. The deadline is reduced to one hour for terrorist or child pornography content.

© 2020 AFP