Louise Bernard with Alexis Patri 10:47 am, November 10, 2021

The Meta group, the new name for the Facebook group, has published its regular report on content moderation across its platforms.

In the midst of political and media turmoil, the social network giant touts improved moderation on its applications.

A claim that nevertheless deserves some qualification.

It's a quarterly report.

But the publication of its latest edition comes amid political and media turmoil for the Meta group (Facebook, Messenger, Instagram, WhatsApp), particularly since the revelations of the Facebook Files in numerous media outlets and the testimony of whistleblower Frances Haugen.

The former Facebook employee accuses the group of prioritizing profit over fighting online hate and disinformation.

She testifies on Wednesday before the National Assembly and the French Senate.


Three hate messages out of 10,000

What does this report say about content moderation? According to Meta, the numbers are improving. The company says it deleted 1.8 billion fake accounts in three months, along with 777 million pieces of spam and more than 22 million hateful posts. It also says it succeeds in automatically detecting almost all content involving nudity, harassment, or advocacy of terrorism on Instagram and Facebook, without any user report being necessary.

Meta also claims that users are less exposed to hateful content: according to the company, only three out of every 10,000 pieces of content viewed by users qualify as such.

A declining figure.

Note that these figures are a worldwide average.

The Facebook Files, however, have shown significant differences in content moderation from one country to another.