The case shook the artificial intelligence community in early June: Blake Lemoine, a Google engineer, told the Washington Post that LaMDA, the company's conversational language model, was probably sentient.

Very quickly, experts in the field – and Google itself – spoke out against this assumption.

LaMDA is a system created to imitate human conversation as convincingly as possible, but that does not mean it understands what it is saying.

On the contrary, several scientists argue, constantly reviving the debate about AI consciousness diverts attention from the more urgent questions these technologies raise.

The old obsession with robot intelligence… a marketing argument?

The hypothesis that our technologies might become conscious is nothing new – it has lingered in our imaginations since Mary Shelley's Frankenstein and the growing success of science fiction.

Imitating human reasoning is also the basis of the Turing test, an experiment designed to determine whether a machine can pass itself off as a human to an outside observer.

One of the fathers of modern computing, John von Neumann, for his part laid the foundations of modern computer architectures by modeling them on the functioning of the brain.

"Even today, many people fund research and work in this direction," points out Laurence Devillers, professor of artificial intelligence at LIMSI/CNRS.

She cites Elon Musk, co-founder of OpenAI; Yann LeCun, head of AI research at Meta, when he raises the possibility that certain machines feel emotions; and Blaise Agüera y Arcas, vice-president at Google, when he describes LaMDA as an artificial cortex… "For an engineer to declare LaMDA conscious has a marketing interest," explains the researcher. "It places Google in a competitive field."

When empathy deceives us

In fact, LaMDA is neither the first robot able to arouse empathy, nor the first algorithmic model capable of producing a credible written conversation.

In the 1960s, for example, the computer scientist Joseph Weizenbaum built Eliza, a program that simulated the responses of a psychotherapist.

The program worked so well that people confided intimate details to it.

We now call the “Eliza effect” the human propensity to attribute more faculties to a technical system than it can possess.
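As an illustration of how little machinery it takes to produce this effect, here is a minimal, hypothetical sketch in the spirit of Eliza's keyword-matching approach; the rules and replies below are invented for the example and are not Weizenbaum's original script.

```python
import random
import re

# Invented, simplified rules in the spirit of an Eliza-style chatbot:
# each rule pairs a keyword pattern with reply templates that echo
# part of the user's own words back as a question.
RULES = [
    (r"\bI am (.*)", ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (r"\bI feel (.*)", ["What makes you feel {0}?", "Do you often feel {0}?"]),
    (r"\bmy (mother|father)\b", ["Tell me more about your {0}."]),
]
FALLBACKS = ["Please go on.", "Can you tell me more about that?"]

def reply(user_input: str) -> str:
    """Return the first matching canned response, or a neutral fallback."""
    for pattern, templates in RULES:
        match = re.search(pattern, user_input, re.IGNORECASE)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)

print(reply("I am sad"))  # e.g. "Why do you say you are sad?"
```

The point is not fidelity to the original program but its simplicity: a handful of pattern-and-echo rules is already enough to trigger the empathetic projection described here.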

Closer to LaMDA, the large language model GPT-3, available since 2020, is also capable of credibly impersonating a journalist, a squirrel or a resurrected William Shakespeare.

But the fact that users, whether experts or not, can take these outputs for consciousness is what frustrates a growing number of scientists.

It is an abuse of our capacity for empathy, believes linguist Emily Bender, the very faculty that makes us project a semblance of humanity onto inanimate objects.

LaMDA, recalls Laurence Devillers, is "fundamentally inhuman": the model was trained on 1.56 trillion words, it has neither body nor history, and it produces its answers through probability calculations…
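As a rough, hypothetical illustration of what these "probability calculations" amount to (a toy sketch, not Google's actual architecture), a language model repeatedly converts scores over a vocabulary into a probability distribution and samples the next word from it; the vocabulary and scores below are made up.

```python
import math
import random

def softmax(scores):
    """Turn raw scores into a probability distribution over the vocabulary."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Toy vocabulary and arbitrary scores; a trained model would compute
# these scores from the conversation so far.
vocabulary = ["yes", "no", "maybe", "friend", "lonely"]
scores = [2.1, 0.3, 1.2, 1.7, 0.5]

probabilities = softmax(scores)
next_word = random.choices(vocabulary, weights=probabilities, k=1)[0]
print(dict(zip(vocabulary, [round(p, 2) for p in probabilities])), "->", next_word)
```

Each word is chosen because it is statistically plausible given what precedes it, which is why fluent output, on its own, says nothing about understanding.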

Artificial intelligence is a social justice issue

Shortly before the Lemoine affair, philosophy doctoral student Giada Pistilli declared that she would no longer speak about the possible consciousness of machines: it diverts attention from the ethical and social issues that already exist.

In this she follows in the footsteps of Timnit Gebru and Margaret Mitchell, two prominent AI ethics researchers fired by Google… for pointing out the social and environmental risks posed by large language models.

"It's a question of power," analyzes Raziye Buse Çetin, an independent researcher in AI policy. "Do we spotlight and fund the quest for a machine we dream of making conscious, or rather the efforts to correct the social, sexist or racist biases of the algorithms already present in our daily lives?"

The ethical problems of the algorithms that surround us on a daily basis are innumerable: on what data are they trained?

How do we correct their mistakes?

What happens to the texts that users send to chatbots built using models similar to LaMDA?

In the United States, a helpline for people at risk of suicide used the responses it received from these highly vulnerable people to train commercial technologies.

“Is this acceptable?

We need to think about how data is used today, about the value of our consent when faced with algorithms whose presence we sometimes do not even suspect, and to look at their aggregate effects, since algorithms are already widely used in education, recruitment, credit scoring…"

Regulation and education

The subject of AI consciousness crowds out further discussion "of the technical limitations of these technologies, the discrimination they cause, their effects on the environment, the biases present in the data," lists Tiphaine Viard, a lecturer at Telecom Paris.

Behind the scenes, these debates have been stirring scientific and legislative circles for several years now because, according to the researcher, "the issues are similar to what happened with social networks." The big tech companies long claimed that they did not need regulation, that they would manage on their own: "The result, fifteen years later, is that we realize we need political and citizen oversight."

What framework, then, could prevent algorithms from harming society?

The explainability and transparency of models are two of the avenues under discussion, in particular with a view to European regulation of AI.

"And these are good leads," continues Tiphaine Viard, "but what should this look like in practice? What is a good explanation? What avenues of recourse are there if it turns out that there has been discrimination?"

There is, for the moment, no settled answer.

The other major subject, emphasizes Laurence Devillers, is that of education.

"People need to be trained very early in the challenges posed by these socio-technical objects": teaching code, helping people understand how algorithms work, building skills… Otherwise, faced with machines built to imitate humans, "users are at risk of being manipulated."

Education, continues the computer scientist, will be the best way to allow "everyone to think about how to adapt to these cutting-edge technologies, to the frictions and brakes we may want to build into them, and to their acceptability", and to push for the construction of an ethical ecosystem "where manufacturers are not left to regulate themselves."
