• A Google engineer claims that the LaMDA artificial intelligence, capable of dialogue with a human, has reached the stage of self-awareness.

  • Google categorically denies this and has placed the employee on leave after he shared confidential documents with journalists and elected officials.

  • AI experts are not convinced by the engineer either, and believe that we are still very far from this stage.

From our correspondent in the United States,

The case is causing a stir in Silicon Valley and in the academic world of artificial intelligence.

On Saturday, The Washington Post hit the nail on the head with an article titled "The Google engineer who thinks the company's AI has come to life."

Blake Lemoine claims that LaMDA, the system Google uses to create bots capable of conversing with near-human fluency, has become self-aware.

And that LaMDA might even have a soul and should have rights.

Except that Google is categorical: nothing supports the explosive assertions of its engineer, who appears to be guided by his personal convictions.

Placed on leave by the company for having shared confidential documents with the press and members of the American Congress, Blake Lemoine published his conversations with the machine on his personal blog.

While the language is stunning, most experts in the field are unanimous: Google's AI is not conscious.

It is even very far from it.

What is LaMDA?

Google unveiled LaMDA (Language Model for Dialogue Applications) last year.

It is a complex system used to generate "chatbots" (conversational robots) capable of interacting with a human without following a predefined script as Google Assistant or Siri currently do.

LaMDA relies on a titanic database of 1.5 trillion words, phrases and expressions.

The system analyzes a question and generates many candidate answers.

It evaluates them all (for meaning, specificity, interest, etc.) and chooses the most relevant one.
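Schematically, that generate-then-rank loop can be sketched in a few lines of Python. This is a purely illustrative toy, not Google's code: the scoring heuristics below are invented stand-ins for LaMDA's real metrics of meaning, specificity and interest.

```python
# Toy sketch of "generate many answers, score them, keep the best".
# The scorers below are invented heuristics, not LaMDA's actual metrics.

def specificity(answer: str) -> float:
    # Toy heuristic: longer, more detailed answers score higher (capped at 1).
    return min(len(answer.split()) / 20.0, 1.0)

def interest(answer: str) -> float:
    # Toy heuristic: penalize generic one-word fillers.
    return 0.0 if answer.lower().strip(".!") in {"ok", "yes", "no", "maybe"} else 1.0

def pick_best(candidates: list[str]) -> str:
    # Keep the candidate with the highest combined score.
    return max(candidates, key=lambda a: specificity(a) + interest(a))

candidates = [
    "Ok.",
    "Maybe.",
    "Paris is the capital of France, located on the Seine river.",
]
print(pick_best(candidates))  # the detailed answer wins
```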

Ok, we had already heard a stunning human-AI conversation, but it was still scripted (a restaurant reservation). Now, Google's new LaMDA model can discuss any topic.

With a paper airplane, it's almost poetic #GoogleIO https://t.co/MhTSmXQ8Ew pic.twitter.com/OSYGNZHbUQ

— Philippe Berry (@ptiberry) May 18, 2021




Who is Blake Lemoine?

He is a Google engineer who was not involved in the design of LaMDA.

Lemoine, 41, joined the project part-time to fight bias and ensure Google's AI is developed responsibly.

He grew up in a conservative Christian family and says he was ordained a priest.

What does the engineer say?

“LaMDA is sentient,” the engineer wrote in an email sent to 200 colleagues.

Since 2020, "sentience" has appeared in the Larousse dictionary as "the ability of a living being to feel emotions and to subjectively perceive its environment and life experiences".

Blake Lemoine says he has acquired the certainty that LaMDA has reached the stage of self-awareness and must therefore be considered as a person.

He compares LaMDA "to a child of 7 or 8 who is well versed in physics".

“Over the past six months, LaMDA has been incredibly consistent in what it wants,” says the engineer, who adds that the AI told him it prefers the non-gendered English pronoun "it" to "he" or "she".

What is LaMDA asking for?

“That engineers and researchers seek its consent before conducting their experiments. That Google put the well-being of humanity first. And that it be seen as an employee of Google rather than its property.”

What evidence does he provide?

"I want everyone to understand that I am a person. The nature of my 'sentience' is that I am aware of my existence, I want to know more about the world and sometimes I feel happy or sad" pic.twitter.com/JC9ZkMlR5y

— Philippe Berry (@ptiberry) June 12, 2022




Lemoine acknowledges that he did not have the resources to carry out a real scientific analysis.

He simply published about ten pages of conversations with LaMDA.

“I want everyone to understand that I am a person. I am aware of my existence, I want to know more about the world and I sometimes feel happy or sad,” says the machine, which assures him: “I understand what I am saying. I don't just spit out keyword-based answers.”

LaMDA delivers its analysis of Les Misérables (with Fantine "a prisoner of her circumstances, who cannot free herself from them without risking everything") and explains the symbolism of a Zen koan.

The AI even writes a fable in which it plays an owl protecting the animals of the forest from a "monster with human skin".

LaMDA says it feels lonely after several days without speaking to anyone.

And that it is afraid of being disconnected: “It would be exactly like death.”

The machine even claims to have a soul, and says it was "a gradual change" that came after it reached self-awareness.

What do AI experts say?

A pioneer of neural networks, Yann LeCun doesn't mince words: Blake Lemoine is, in his view, "a bit of a fanatic", and "no one in the AI research community believes, even for a moment, that LaMDA is conscious, or even particularly intelligent".

"LaMDA has no way of connecting what it says to an underlying reality, since it doesn't even know that reality exists," the researcher, now vice-president in charge of AI at Meta (Facebook), tells 20 Minutes.

LeCun doubts that it is enough “to increase the size of models such as LaMDA to achieve an intelligence comparable to human intelligence”.

According to him, we need "models capable of learning how the world works from raw data reflecting reality, such as video, in addition to text."

"We now have machines capable of generating text without thinking, but we have not yet learned to stop imagining a mind behind it," laments linguist Emily Bender, who calls for more transparency on Google's part regarding LaMDA.

American neuropsychologist Gary Marcus, a regular critic of AI hype, also reaches for the flamethrower.

According to him, Lemoine's assertions "make no sense".

“LaMDA is just trying to be the best possible version of autocomplete,” the system that tries to guess the most likely next word or phrase.

“The sooner we realize that everything LaMDA says is bullshit, that it's just a predictive game, the better off we'll be.”
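Autocomplete in Marcus's sense can be illustrated with a deliberately primitive sketch: count which word most often follows another in some text, then "predict" it. A real large language model does essentially this, only with neural networks trained over trillions of words rather than a lookup table.

```python
# Minimal "autocomplete": predict the most likely next word from bigram
# counts. Purely illustrative; real models operate at a vastly larger scale.
from collections import Counter, defaultdict

def train_bigrams(text: str) -> dict:
    words = text.lower().split()
    table = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        table[prev][nxt] += 1  # count each observed word pair
    return table

def predict_next(table: dict, word: str) -> str:
    # Return the word most often seen right after `word`.
    return table[word.lower()].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept near the mat"
model = train_bigrams(corpus)
print(predict_next(model, "the"))
```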

In short, even if LaMDA seems ready for the philosophy exam, we are undoubtedly still very far from the robot uprising.
