Twitter illustration (SOPA Images / SIPA)

  • Twitter suspended several accounts of feminist activists who re-shared the question "How do we get men to stop raping?"

    "

  • The social network admitted a mistake, pointing to the increased use of machine learning and automation.

  • Is artificial intelligence so bad at differentiating hateful content from other content?

Is Twitter's algorithm really the only one responsible?

Last week, the account of the feminist and anti-racist activist Mélusine, along with several others, was suspended over a simple question: "How do we make sure that men stop raping?"

After an outcry on the social network, Twitter eventually acknowledged an error, blaming its algorithm.

"We have increased our use of machine learning and automation to take more action on potentially abusive and manipulative content. We want to be clear: although we strive to ensure the consistency of our systems, the context usually provided by our teams may sometimes be lacking, leading us to make mistakes," Twitter France explained to 20 Minutes on Thursday.

The humans behind the algorithms

It is true that artificial intelligence still has some progress to make in spotting hateful posts.

It does not pick up on irony (to be fair, humans sometimes don't either) and it frankly struggles to understand the context of certain exchanges.

As a result, several tweets by LGBT activists were recently censored for containing the words "dykes" or "queers", used in a deliberate reclaiming of the slur.

Are algorithms that bad?

"Artificial intelligence will never be 100% accurate," concedes Isabelle Collet, a researcher at the University of Geneva who works on gender issues in tech and on equality pedagogy. "When it analyzes context, the AI does not understand the meaning of the sentence, but it can draw analogies with sentences already classified as hateful."

By analogy, it can say: there is a 90% chance that this tweet is hateful, because it is 90% similar to tweets that have been certified as hateful.
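To make that mechanism concrete, here is a minimal sketch of similarity-based scoring using scikit-learn's TF-IDF vectorizer. The labeled examples and the scoring function are invented for illustration; this is not Twitter's actual model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical human-labeled examples; a real corpus is far larger,
# and the labels reflect the annotators' own judgments.
labeled_tweets = [
    ("how to rape a woman", 1),           # labeled hateful
    ("women belong in the kitchen", 1),   # labeled hateful
    ("what a lovely day outside", 0),     # labeled harmless
    ("great match tonight everyone", 0),  # labeled harmless
]

texts = [text for text, _ in labeled_tweets]
labels = [label for _, label in labeled_tweets]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(texts)

def hate_score(tweet: str) -> float:
    """Return the similarity of `tweet` to the closest hateful example.

    The model has no notion of meaning or intent: it only measures
    surface word overlap with previously labeled tweets.
    """
    vec = vectorizer.transform([tweet])
    sims = cosine_similarity(vec, matrix)[0]
    return max(s for s, y in zip(sims, labels) if y == 1)

# A question denouncing rape shares words ("how", "to", ...) with a
# hateful example, so it scores well above the harmless tweets, even
# though its intent is the exact opposite.
print(f"score: {hate_score('how do we get men to stop raping'):.2f}")
```

The score rests entirely on surface vocabulary, which is precisely why a denunciation of rape can end up resembling the hateful tweets it condemns.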

In the field of artificial intelligence, training data is the sinews of war.

The more annotated, labeled, classified data there is, the better the algorithm performs.

But this dataset is assembled by humans, who decide what counts as hateful or insulting content.

"The human will place the cursor", underlines the researcher.

At the outset, a human intelligence decided what does or does not constitute an insult, and that choice is a matter of subjectivity.

In short, artificial intelligence only follows what it has been taught.
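A toy experiment makes this dependence on human labels concrete. In the sketch below (made-up tweets, made-up annotators, a scikit-learn Naive Bayes classifier), the only thing that changes between the two runs is one annotator's judgment on a reclaimed slur, and the model's verdict flips with it.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Two hypothetical annotators label the same reclaimed-slur tweet differently.
corpus = [
    "dykes unite and march",   # reclaimed usage by activists
    "go away you dyke",        # slur used as an attack
    "nice weather today",
    "great match tonight",
]
labels_a = [0, 1, 0, 0]  # annotator A: reclaimed usage is not hateful
labels_b = [1, 1, 0, 0]  # annotator B: any use of the slur is hateful

for name, labels in [("annotator A", labels_a), ("annotator B", labels_b)]:
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(corpus, labels)
    verdict = model.predict(["proud dykes on the march"])[0]
    print(name, "->", "hateful" if verdict else "not hateful")
```

Same architecture, same text, opposite verdicts: the model simply reproduces whichever line its annotators drew.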

"How to rape a woman"

In the case of the suspended feminist accounts, blaming the limits of the algorithm is itself open to question.

In May this year, a teenager posted a thread on Twitter titled "How to Rape a Woman."

"For about fifteen tweets, he kept repeating these questions, and his account was not suspended," recalls Isabelle Collet. "I have doubts about this excuse of automation. If the algorithm triggered on 'how to make men stop raping', it should have triggered on 'how to rape a woman'."

Likewise, @pastadaronne had already used the word "dyke" in posts prior to the one that was hidden.

Some believe that these seemingly random account suspensions could be the result of mass reporting.

Journalist Titiou Lecoq drew this connection in Slate regarding the feminist accounts.

"It is difficult to see which keywords triggered the suspension," she wrote.

Especially since some accounts which took it over were suspended and not others.

(…) Or, another hypothesis, masculinist activists have come together to report the tweets (especially since it is common in these circles to have different accounts, which allows each individual to increase their capacity for nuisance) ” .

Another automation problem

As coordinated campaigns and raids multiply online, couldn't Twitter identify which accounts are behind these reports?

It is indeed an automation problem, but not the one you might think.

"Take the example of the CSA, France's broadcasting regulator: past a certain number of complaints, it looks into what is happening. Twitter, by contrast, simply cuts people off," notes Isabelle Collet. "It's a shame, because a large share of reports come from problematic groups."
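The contrast Collet draws can be sketched in a few lines. Everything below is hypothetical, including the threshold value and the signals attached to a report; it simply opposes acting on raw report volume to using volume as a trigger for human review while discounting accounts that report in coordinated bursts.

```python
from dataclasses import dataclass

REPORT_THRESHOLD = 50  # invented value; real thresholds are not public

@dataclass
class Report:
    reporter_id: str
    # Hypothetical signal: how many reports this reporter filed recently.
    reporter_recent_reports: int

def auto_suspend(reports: list[Report]) -> bool:
    """The policy the article attributes to Twitter: act on raw volume."""
    return len(reports) >= REPORT_THRESHOLD

def queue_for_human_review(reports: list[Report]) -> bool:
    """The policy Collet likens to the CSA: volume only triggers a human
    look, and prolific mass-reporters count for less, not more."""
    weighted = sum(1.0 / max(1, r.reporter_recent_reports) for r in reports)
    return weighted >= REPORT_THRESHOLD
```

Under the second policy, fifty reports from fifty ordinary users carry far more weight than fifty reports from a handful of accounts reporting in bulk.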

If Twitter has taken down tens of thousands of conspiracy accounts, why doesn't it tackle the highly organized masculinists who crusade across the web?

