Illustration of the social network Twitter - Clément Follain / 20 Minutes

  • Over the weekend, Twitter's algorithm responsible for generating the preview of photos posted on the platform was accused of racist bias.

  • Users noticed that the system almost systematically favors white people by obscuring those with darker skin.

    A hypothesis called into question by a study carried out shortly afterwards by an American researcher.

  • Twitter said it did not find any race or gender bias when developing this algorithm.

    The social network concedes, however, that it still has "some analytical work to do".

A new controversy has erupted over the racist biases that artificial intelligence systems can harbor.

This time, it is the social network Twitter that is under fire, because of the algorithm used to automatically crop the photos published on the platform.

From the United States to Brazil, via France, several Internet users have denounced the “racism” of this system.

Over the past few days, many users have carried out experiments to check whether biases related to skin color really exist.

According to them, when a photo brings together a Black person and a white person, the latter is, in the majority of cases, put forward in the preview at the expense of the Black person.

On Sunday, for example, Tony Arcieri, an American engineer, tried different combinations with photographs of former US President Barack Obama and Mitch McConnell, the (white) senator from Kentucky and leader of the Republican majority in the upper house of the US Congress.

Regardless of the order of the shots, and even after swapping the colors of their ties (blue and red), it was always Senator McConnell's face that served as the preview.

Only inverting the colors of the image, which obscures skin tones, allowed Barack Obama to be highlighted.

Trying a horrible experiment ...



Which will the Twitter algorithm pick: Mitch McConnell or Barack Obama?

pic.twitter.com/bR1GRyCkia

- Tony “Abolish (Pol) ICE” Arcieri 🦀 (@bascule) September 19, 2020

A problem of contrast?

It all started on Saturday, when a doctoral student, wanting to call out a racist bias in the Zoom videoconferencing platform, posted on Twitter a screenshot of his video call with a Black colleague.

He then noticed that on the Twitter mobile app, the algorithm automatically chose his own face for the preview, even when he flipped the image.

“Any idea why Twitter decided to only show the right side of the photo on the mobile version?” he asks.

Geez ... any guesses why @Twitter defaulted to show only the right side of the picture on mobile?

pic.twitter.com/UYL7N3XG9k

- Colin Madland (@colinmadland) September 19, 2020

This hypothesis has, however, been called into question by other experiments.

In particular, Vinay Prabhu, a researcher at Carnegie Mellon University in Pittsburgh, says he conducted a “systematic” study to test whether these biases are real.

He created a program whose results contradict the theory that the algorithm is racist.

He relied on a grid of images made up of standardized shots of Black men and white men placed side by side and separated by a blank image (a rough sketch of this setup follows his tweet below).

Of the 92 images, Twitter highlighted the black model 52 times, compared to 40 for the white model.

(Results update)


White-to-Black ratio: 40:52 (92 images)


Code used: https://t.co/qkd9WpTxbK


Final annotation: https://t.co/OviLl80Eye


(I've created @cropping_bias to run the complete experiment. Waiting for @Twitter to approve Dev credentials) pic.twitter.com/qN0APvUY5f

- Vinay Prabhu (@vinayprabhu) September 20, 2020
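
As a rough sketch of this kind of setup (not Prabhu's actual code, which is linked in his tweet), the composite images can be built with Pillow and the preview outcomes tallied by hand once each composite has been posted; the file names below are hypothetical.

```python
# Sketch of the pairing experiment: stack two standardized portraits with a
# long blank band between them, so Twitter's preview must pick one face.
# File names are hypothetical; the tally figures are those reported above.
from PIL import Image

def compose_pair(top_path, bottom_path, size=(256, 256), gap=1024):
    """Stack two portraits vertically, separated by a blank band."""
    top = Image.open(top_path).resize(size)
    bottom = Image.open(bottom_path).resize(size)
    canvas = Image.new("RGB", (size[0], 2 * size[1] + gap), "white")
    canvas.paste(top, (0, 0))                 # first face at the top
    canvas.paste(bottom, (0, size[1] + gap))  # second face at the bottom
    return canvas

# Build both orderings for one pair of models.
compose_pair("black_model_01.jpg", "white_model_01.jpg").save("pair_bw_01.png")
compose_pair("white_model_01.jpg", "black_model_01.jpg").save("pair_wb_01.png")

# After posting each composite, annotation reduces to a simple tally.
shown = {"black": 52, "white": 40}            # counts reported in the study
print(f"White-to-Black ratio: {shown['white']}:{shown['black']} "
      f"({sum(shown.values())} images)")
```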

Faced with the controversy sparked over the weekend, some Twitter executives reacted on Sunday.

Dantley Davis, Twitter's head of design, believes, for example, that Colin Madland's photo was highlighted by the algorithm because of the contrast between his dark beard and his fairly pale skin.

He posted the same screenshot with Madland's beard lightened, in order to show that his colleague was then highlighted.

Based on some experiments I tried, I think @colinmadland's facial hair is affecting the model because of the contrast with his skin.

I removed his facial hair and the Black man shows in the preview for me.

Our team did test for racial bias before shipping the model.

pic.twitter.com/Gk33NQlGgB

- Dantley 🔥✊🏾💙 (@dantley) September 19, 2020

"We still have analytical work to do"

According to Liz Kelley, of Twitter's communications department, the company had “not detected any bias related to race or gender” during tests carried out beforehand.

In her tweet, she concedes: “It is obvious that we still have some analytical work to do.” For his part, Zehan Wang, an engineer at the platform, said that “the algorithm does not use facial recognition”, before confirming that no “significant bias” had been found at the time.

thanks to everyone who raised this.

we tested for bias before shipping the model and didn't find evidence of racial or gender bias in our testing, but it's clear that we've got more analysis to do.

we'll open source our work so others can review and replicate.

https://t.co/E6sZV3xboH

- liz kelley (@lizkelley) September 20, 2020
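
Twitter has elsewhere described the cropping model as predicting “saliency” (the region of an image a viewer's eye is likely to land on first) rather than detecting faces. As a generic illustration of that technique only, and not Twitter's actual neural model, here is a saliency-driven crop using the spectral-residual estimator from opencv-contrib-python; the file name is hypothetical.

```python
# Generic saliency-driven cropping, NOT Twitter's own model: estimate a
# saliency map, then crop a band centered on the most salient point.
import cv2
import numpy as np

def saliency_crop(image_path, crop_height=300):
    """Crop a horizontal band centered on the most salient pixel."""
    img = cv2.imread(image_path)
    estimator = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, sal_map = estimator.computeSaliency(img)
    if not ok:
        raise RuntimeError("saliency computation failed")
    # Center the band on the row of the most salient pixel, clamped to bounds.
    row, _col = np.unravel_index(np.argmax(sal_map), sal_map.shape)
    top = min(max(0, row - crop_height // 2), max(0, img.shape[0] - crop_height))
    return img[top:top + crop_height]

# Hypothetical usage on a tall composite like those in the experiment above:
# cv2.imwrite("preview.png", saliency_crop("pair_bw_01.png"))
```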

This is not the first time that such an anomaly has been pointed out in an algorithm built by a Silicon Valley giant.

In 2015, Google had to remove the “gorilla” label from its Photos app after the software mistook African Americans for gorillas.

More recently, a 2018 study by the prestigious Massachusetts Institute of Technology (MIT) showed large disparities in the success rates of facial recognition software from Microsoft and IBM between pictures of white men and those of women and people with darker skin tones.

For Winston Maxwell, director of law and digital studies at Télécom-Paris, it is important to rule out “the idea that racist or sexist biases are deliberately included in an algorithm”.

According to this specialist in data regulation and AI, the malfunction can come from what is known as statistical bias.

“To train an algorithm, if we use a database that has many more images of dogs than images of cats, the machine will in fact be more efficient when it encounters a photo of a dog,” he explains.

A need for transparency and high standards

According to Winston Maxwell, it is essential not to confuse algorithms such as Twitter's with facial recognition software, which is much more advanced and harder to train.

“Twitter's use case is certainly embarrassing, and it gives the social network a very bad brand image, but the possible repercussions are limited compared to the use of facial recognition, which could lead to the mistaken arrest of an African American.”

For Winston Maxwell, “to say that an algorithm does not have any bias is generally wrong”.

As demands for transparency grow, this former lawyer believes that the engineers behind such software will have to be “more demanding about testing” and flag any remaining biases, “as a product notice would, for example”.
