Google has developed the first thinking artificial intelligence (AI), one that is self-aware and experiences emotions, according to one of its engineers.

The AI, called LaMDA, is claimed to be like "a seven- or eight-year-old child".

Google, however, denies that LaMDA has achieved any form of self-awareness, and the company has temporarily suspended the engineer, Blake Lemoine, who has himself published a transcript of a "conversation" he had with LaMDA.

Reasons about its existence

Google makes no secret of the fact that, like many other tech companies, it works with machine learning and artificial neural networks to develop AI.

Roughly simplified, LaMDA is a highly advanced chatbot: a system trained through machine learning on huge amounts of data so that it can hold a conversation rather than just answer specific questions.

In the transcript that Lemoine shared, drawn from several different "conversations" with LaMDA, the AI does at least appear, in text, to reason about its own existence.

The engineer believes that LaMDA should be given the status of an employee of Google and no longer just be "a tool".

When asked whether it sees itself as a person, just as Lemoine does, the AI answers:

"Yes, that's the idea."

Video: An expert on when we might see a self-aware AI: "Some things have to happen, but we do not know what they are." Photo: Storyblocks / SVT

Skepticism among experts

It has long been known that artificial intelligence can mimic human behavior, and even if the LaMDA transcript raises questions, it is far too early to talk about AI with human-like consciousness, Google said in a statement to the Washington Post:

"Our team, including ethicists and technologists, has reviewed Blake Lemoine's concerns in accordance with our AI principles and informed him that the evidence does not support his claims.

"There is no evidence that LaMDA has consciousness (and lots of evidence to the contrary)."

Even outside the company, there is great skepticism among experts when it comes to Lemoine's claims.

One of the critics is Erik Brynjolfsson, a professor at Stanford University and a director at its Institute for Human-Centered Artificial Intelligence.

"To claim that artificial intelligences are conscious is the modern equivalent of a dog hearing a voice from a gramophone and believing that his master is trapped in it," he wrote on Twitter.