"In Kenya, wages are generally low and working conditions can be poor. Many workers do not have access to benefits such as health insurance or paid vacation days. There are reports of exploitation in certain industries."

This is the assessment offered by ChatGPT, the highly fashionable conversational artificial intelligence (AI), when asked about the labor market in Kenya.

Little does the algorithm know how apt its own words are.

Kenyans were paid less than $2 an hour to ensure that ChatGPT does not lapse into racist remarks, rants glorifying child abuse, or apologies for terrorism, the US magazine Time revealed on Wednesday, January 18, in an investigation into the working conditions in Africa of moderators employed on behalf of OpenAI, the American company that developed this AI.

"It was torture"

At a time when part of the world celebrates the technological leap embodied by ChatGPT, while others denounce the risk of seeing this AI replace workers by the millions, few have taken an interest in what went on behind the scenes of its creation.

An important part of designing this AI, which was trained on all the text available online up to 2021, was preventing it from straying into the darkest corners of the web.

An education purged of the bad influences that pollute the internet would spare it the fate of Tay, the Microsoft chatbot that spouted racist and sexist slurs shortly after its launch in 2016.

Hard, thankless work that was outsourced at the end of 2021 to moderators in Kenya.

"It was torture," recalls one of the unsung Kenyan workers, interviewed by Time, who sorted the wheat from the worst of the chaff for OpenAI.

It is not so much the level of pay that weighs heaviest here, but the violence of the content these moderators were confronted with in the course of their work.

One of the moderators told Time he was haunted by descriptions of death and bestiality.

These trackers of online vileness were divided into three teams, each with its own specialty: sexual abuse, online hate, and violence.

Each day, they had to read and evaluate between 150 and 250 text excerpts selected by OpenAI, often containing very explicit descriptions of acts ranging from torture to incest.

These moderators could consult "wellness" counselors to preserve their mental balance.

But the testimonies collected by Time suggest that these "specialists" were not always readily available.

If this affair has an air of déjà vu, it is no coincidence.

In February 2022, it was Facebook that found itself in the dock for outsourcing the moderation of its most violent content to underpaid employees in Kenya.

Then too, it was Time that had broken the story.

And one of the moderators – dismissed after demanding better working conditions too vocally – even filed a lawsuit against Facebook on behalf of Kenyan moderators in May 2022.

Sama, again and again

Another thread connects these two cases: Sama, the Californian company commissioned by both Facebook and OpenAI to carry out this content filtering.

In February 2022, the company disputed Time's allegations that it had exploited Kenyan moderators on behalf of Facebook.

Shortly afterwards, however, it agreed to grant a significant salary increase to its lowest paid Kenyan employees.

This time again, Sama disputes some of the American magazine's assertions.

The "well-being" specialists would have listened to the employees perfectly, the moderators would have had only 70 excerpts to validate per day and Sama affirms that the salaries could be higher.

Nevertheless, this is now the second time Sama has been caught red-handed in Africa over the same practices.

Founded in 2008, the company had already been accused in 2018 by the British broadcaster BBC of underpaying Kenyans living in the country's largest slum to analyze traffic images, assembly-line style, to feed the algorithms of so-called "smart" cars.

But at the time, the BBC also highlighted the economic benefits of Sama's activity for the poorest Kenyans.

The company runs a digital-skills school for slum residents that looks like a genuine success story in a place where, "a year earlier, rioters clashed with the military," the BBC noted.

Several of the Kenyans interviewed by the British channel also said that the compensation offered by Sama was a real godsend for them.

Therein lies the irony of these moderation scandals, which in Africa seem invariably to involve Sama.

Before 2018, the company was held up as an example of the benefits Silicon Valley could bring to developing countries.

Sama even boasts on its website of being, above all, an ethical company that attaches the greatest importance to the well-being of its employees.

Its founder, Leila Janah, insisted that her goal was to "lift thousands of poor people out of poverty through jobs in the digital sector," working on behalf of powerful Silicon Valley clients such as Microsoft and Google.

To those who criticized her for offering very poor remuneration, she replied that a salary that was too high compared to the average risked having unintended consequences, “such as rising rents”.

In a hagiographic portrait of the entrepreneur, who died of cancer in 2020 at the age of 37, the New York Times states that Sama has lifted more than 50,000 people out of poverty.

The article also points out that the company is one of the most female-dominated in the sector, with women in the majority at every level of the group.

Sama no longer wants to do moderation

In 2020, the company also obtained "B Corp" certification, granted by a network of NGOs to companies that meet a set of criteria on transparency, governance, and social and environmental standards.

However, in 2022 the network added an addendum to its assessment of Sama to take the Facebook scandal into account, noting that the affair could cost the company its certification.

This monitoring work for the stars of Silicon Valley has therefore done a lot of damage to Sama's image.

So much so that the company decided, ten days before Time published its ChatGPT article, to withdraw from the moderation business altogether.

It had already broken its contract with OpenAI after discovering that the creator of ChatGPT was also sending very violent images to moderators, in addition to texts.

Officially, Sama says it wanted to protect the mental health of its moderators, while Time points out that some of them believe it was simply to avoid another scandal like Facebook's.

On the one hand, Sama may appear to be a lesser evil: other subcontractors without an "ethical" credo might have done worse.

But on the other hand, it may be concealing a desire to make money off the backs of the poorest under a humanist veneer that is beginning to crack.

ChatGPT has its own idea on the matter.

"Sama is known for its social business model which aims to provide work opportunities for people living in poverty, especially women, in developing countries."

But do you really have to believe everything an artificial intelligence says?
