The article published Thursday "greatly exaggerates the magnitude of the bug, which in the end had no significant or long-term impact on the problematic content," said Joe Osborne, a spokesperson for Meta, Facebook's parent company.

According to The Verge, the group's engineers wrote an internal report citing a "massive ranking failure" of content.

In October, they noticed that the News Feed's algorithms were giving wider distribution to certain content flagged as questionable by external media outlets, members of "Third party fact-checking", the verification program developed by Facebook.

"Unable to find the cause of the problem, engineers saw the wave subside a few weeks later, only to resurface repeatedly until the ranking issue was fixed on March 11," the article details.

According to Joe Osborne, however, the incident only affected a "very small number of views".

"The overwhelming majority of News Feed content is not eligible for demotion," he explained, adding that other mechanisms designed to avoid exposing users to so-called "harmful" content had remained in place, such as "other demotions, fact-checking warnings and withdrawals".

AFP participates in "Third party fact-checking" in more than 80 countries and 24 languages.

Under this program, launched in December 2016, Facebook pays more than 80 media outlets around the world, both general-interest and specialized, for the use of their fact-checks on its platform, on WhatsApp and on Instagram.

If a piece of content is rated false or misleading by one of these outlets, users are less likely to see it appear in their News Feed.

If they do see it or try to share it, the platform suggests that they read the verification article.

Users who have already shared the content receive a notification redirecting them to the article.

Posts themselves are not deleted.

The participating media outlets are entirely free to choose which subjects to check and how to treat them.

© 2022 AFP