The controversial bill from Laëtitia Avia provides that platforms and search engines will be obliged to remove hateful content - ATTA KENARE / AFP

  • Article 1 of an LREM bill requires social networks, collaborative platforms and search engines to delete hateful content within 24 hours.
  • Coupled with the fear of a fine, this mechanism could encourage platforms to delete more content with less scrutiny and to rely more heavily on algorithms.
  • Given the lack of transparency from social networks about how moderation is carried out, some specialists fear an obstacle to freedom of expression.

Parliament is due to give final adoption, by the end of January, to the bill from LREM deputy Laëtitia Avia, intended to combat hateful content online. While the latest examination of the text in the National Assembly removed messages related to human trafficking and pimping from the scope of the law, the deputies left untouched the bill's flagship and controversial measure, which obliges platforms and search engines to remove “manifestly” illegal content within 24 hours.

Among that content: incitement to terrorism, incitement to hatred or violence, discriminatory remarks, and insults of a racist, homophobic or religious nature. This measure is the main point of friction between the Assembly and the Senate, which simply wants to remove it.

Towards arbitrary censorship?

It also raises reservations from various organizations, such as the National Digital Council, the National Consultative Commission on Human Rights and La Quadrature du Net, which see it as an attack on freedom of expression. According to various experts, the application of this text promises to be complex and of limited effect, and will have to meet certain conditions.

“The law indicates that it is up to platforms and social networks, which are private actors, to judge their own content. The terms ‘manifestly illicit’ imply an element of arbitrariness and leave a margin of appreciation,” explains Nikos Smyrnaios, lecturer in information and communication sciences at the University of Toulouse-III. He fears that the short deadline imposed for deleting this content, coupled with the fear of a fine - also introduced by the text, and of up to 1.25 million euros - may push platforms to delete content at the slightest doubt, contributing to a generalization of censorship.

Artificial intelligence cannot do everything

"The smallest platforms do not have the same means as the large ones in terms of moderation and are therefore not able to respond correctly to these requirements", continues the researcher. As for web giants, the most effective response to meet the deadline may be based on algorithms. A technology on which experts urge caution.

In a report released on Wednesday, the independent organization ISD (Institute for Strategic Dialogue) provides an overview of the nature of hate speech published on social networks in France. Among their conclusions, the researchers highlighted the limits of language-processing algorithms for identifying and moderating hate speech online. The algorithms developed for the study reached 85% accuracy in identifying hate speech, “a high level for this type of research,” the document notes, while specifying that “artificial intelligence should not be considered a panacea.” “There are always gray areas depending on the context,” says Iris Boyer, deputy head of ISD's technology, communications and education division.

"Satire, irony, humor escapes technology"

“There is this kind of fantasy that an artificial intelligence could filter content 100% automatically, but it does not exist,” adds Nikos Smyrnaios. “All algorithms have a margin of error, simply because human language is too complex. Satire, irony, humor escape technology.” He also points to cases where a user quotes a hateful statement in order to denounce it, or where a targeted community decides to appropriate an insult and divert it from its original meaning.

"Approaches to moderate hate content or to promote hate should include a human review," advises the ISD study. A system already implemented by the big social networks, but which is not flawless. "Facebook, which has teams of moderators, for example censored the historic photo of the little girl burned with napalm during the Vietnam War, deeming it to be child pornography," recalls Nikos Smyrnaios.

More transparency

"The big social networks have already set up infrastructures for moderation, but completely dissociated from society, judges or a government," notes Iris Boyer. She suggests that online platforms work hand in hand with actors from civil society organizations: "If we want to be sure that web companies take into account the nuances of discourse, for example, associations representing the people targeted by these hate speech participate in the training of moderators ”. “One could also imagine public participation in decisions. This is somewhat what is already done with Wikipedia, where moderation is managed by the contributors, "said Nikos Smyrnaios.

Behind this desire to open moderation up to outside actors, the two experts above all hope for more transparency on the criteria used for moderation. “If private platforms are the only judges, things could become even more opaque to the law,” fears Nikos Smyrnaios. “We may know the number of pieces of content deleted, but surely not the criteria used to make the decision.” “Transparency of content-moderation processes and greater oversight from a government regulator [...] are necessary to ensure that moderation is appropriate, precise and provided with sufficient resources,” recommends the ISD.


A one-hour deadline for content reported by the police

"While the law initially required removing illegal content in 24 hours, it now requires platforms to remove in one hour the content that the police will report to him as falling under terrorism or abuse of minors", denounces the association La Quadrature du Net in a press release published this Wednesday. This provision was added by government amendment. "The police will decide alone on terrorist content - without the supervision of a judge," worries La Quadrature.
