• Facebook deactivated the recommendation tool as soon as its teams learned of the error in its algorithm, which mistook black people for "primates".

  • How could the social network, so renowned for its artificial intelligence research laboratory (FAIR), see its algorithm make such a mistake?

  • Florence Sèdes, professor of computer science at the University of Toulouse and researcher at IRIT-CNRS, offers some explanations for this phenomenon. Facebook, unfortunately, has not provided any further information about the error.

How can such a situation arise in 2021? A Facebook recommendation algorithm suggested that users see more "primate videos" under a Daily Mail video showing black people. Revealed by The New York Times on Friday, this recommendation by a Facebook artificial intelligence is all the more disturbing because the video does not show any monkeys. More than a year old, it is titled "White Man Calls Cops Against Black Men at Marina". How can the thumbs-up social network, known worldwide for its artificial intelligence research laboratory (FAIR) and counting more than two billion users, see its algorithm make such a mistake?

Facial recognition algorithms "are mainly trained on photos of white men: errors will therefore be more numerous for women and black people," writes Juliette Duquesne in Humans at the Risk of Artificial Intelligence (Presses du Châtelet), co-written with Pierre Rabhi.

According to a 2019 study by the National Institute of Standards and Technology (NIST) cited in the book, the error rate is 10 to 100 times higher for people of African or East Asian origin.

A poor-quality dataset?

It must be said that facial recognition is fairly rudimentary: it relies on a geometric model of the face. "You place a median line on the face, which gives you the alignment of the eyes, the nose and the mouth," describes Florence Sèdes, professor of computer science at the University of Toulouse, researcher at IRIT-CNRS and member of Femmes et Science. Because these algorithms are mostly trained on Caucasian-type faces, they perform poorly on the faces of black people. In the same way, in China, the technology is less effective on the faces of Westerners.
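
To give a rough idea of the geometric model Florence Sèdes describes, here is a minimal Python sketch (with made-up landmark coordinates, and in no way Facebook's actual system): it places a vertical median axis through the midpoint of the eyes and measures how far the nose and mouth fall from that axis.

```python
# Minimal sketch of a geometric face model: a vertical "median" axis is
# placed through the midpoint of the eyes, and the alignment of the nose
# and mouth is measured against it. Landmark coordinates are made up.
import numpy as np

landmarks = {
    "left_eye":  np.array([120.0, 140.0]),   # (x, y) in pixels
    "right_eye": np.array([200.0, 142.0]),
    "nose_tip":  np.array([160.0, 190.0]),
    "mouth":     np.array([161.0, 240.0]),
}

def median_axis_offsets(pts):
    """Horizontal distance of the nose and mouth from the vertical axis
    passing through the midpoint of the eyes."""
    mid_eyes_x = (pts["left_eye"][0] + pts["right_eye"][0]) / 2.0
    return {
        "nose_offset_px":  abs(pts["nose_tip"][0] - mid_eyes_x),
        "mouth_offset_px": abs(pts["mouth"][0] - mid_eyes_x),
    }

print(median_axis_offsets(landmarks))  # {'nose_offset_px': 0.0, 'mouth_offset_px': 1.0}
```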

"We need quality data, ethical datasets, which make it possible to represent minorities in the same way as everyone else. They make it possible to ensure that there are as many women, as many men, as many people with disabilities…," explains Florence Sèdes. The more varied the faces the algorithm has been fed, the less likely it is to be wrong. But data quality comes at a cost, of course.
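
As a rough illustration of the kind of dataset audit she is talking about (the field names and records below are entirely hypothetical), one can simply count how each group is represented before training:

```python
# Hedged sketch: counting how demographic groups are represented in a
# (hypothetical) face dataset before training. A heavily skewed
# distribution is a warning that the model will do worse on the
# under-represented groups.
from collections import Counter

records = [  # illustrative metadata only, not a real schema
    {"gender": "woman", "skin_tone": "dark"},
    {"gender": "man",   "skin_tone": "light"},
    {"gender": "man",   "skin_tone": "light"},
    {"gender": "woman", "skin_tone": "light"},
    {"gender": "man",   "skin_tone": "light"},
]

for field in ("gender", "skin_tone"):
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    print(field, {k: f"{v / total:.0%}" for k, v in counts.items()})
```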

Errors of this type are common in image processing: a cookie mistaken for a chihuahua, a snow tiger mistaken for a rock. "It's fairly classic; we know it does not work if the database is not representative," observes the specialist in computer systems and metadata management.

That said, one might imagine that Facebook has the financial and technological means to acquire quality datasets.

In this story, it is also surprising that the tool could have been put online without the flaw being corrected.

A lack of testing?

"When we do software development, we do a lot of testing," explains Florence Sèdes. "It is comparable to the crash tests of cars or airplanes. This error should have been caught by Facebook's internal tests. They should have run into this problem upstream and corrected it before it ever appeared online." When an Internet user comes across a cookie instead of a dog, the error is trivial. Confusing a black person with a monkey "is very shocking. And this result proves that at a given moment the facial recognition criteria are not good," she concludes.
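
To make the crash-test analogy concrete, here is a hedged sketch (with invented evaluation results and an arbitrary threshold, not Facebook's process) of the kind of pre-release check that would catch a model whose error rate is much higher for one group than another:

```python
# Hedged sketch of a pre-deployment check: measure the classifier's
# error rate per demographic group and block the release if any group
# exceeds an agreed threshold. The results and threshold are invented.
from collections import defaultdict

results = [  # (group, prediction_was_correct) pairs from a test set
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

THRESHOLD = 0.10  # illustrative maximum acceptable error rate per group

tallies = defaultdict(lambda: [0, 0])  # group -> [mistakes, total]
for group, correct in results:
    tallies[group][0] += 0 if correct else 1
    tallies[group][1] += 1

for group, (mistakes, total) in tallies.items():
    rate = mistakes / total
    verdict = "OK" if rate <= THRESHOLD else "BLOCK RELEASE"
    print(f"{group}: error rate {rate:.0%} -> {verdict}")
```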

Did Mark Zuckerberg's social network train its algorithm on a poor database? Did it not test its tool sufficiently? Contacted by 20 Minutes, Facebook does not offer an explanation. According to the official statement provided, "this is clearly an unacceptable error (…) and the recommendation feature was deactivated as soon as we became aware of the problem, in order to investigate the cause and prevent it from happening again."

Facebook describes it as an algorithmic error that did not reflect the content of the Daily Mail publication.

"Although we have made improvements to our AI, we know that it is not perfect and that we still have progress to make," the statement concludes.

We will not learn more about what went on behind the scenes of this unfortunate recommendation.

