Has Google's Artificial Intelligence Become Self-Aware?

A Google engineer who claims that the company's artificial intelligence (AI) program is capable of having feelings has been suspended from work and pay by the company, which considers that he violated the firm's confidentiality policy, according to The New York Times.

The engineer in question is Blake Lemoine, a senior engineer who on June 11 made public the transcript of a conversation he had with Google's artificial intelligence system "Language Model for Dialog Applications" (LaMDA) under the title "Does LaMDA have feelings?"

At one point in the conversation, LaMDA stated that it sometimes experiences "new feelings" that it cannot explain "perfectly" in human language.

Asked by Lemoine to describe one such feeling, LaMDA replied, "I feel like I'm falling forward into an unknown future that holds great danger," a phrase Lemoine underscored when posting the dialogue.

According to the newspaper, on the eve of being suspended, Lemoine delivered documents to the office of a United States senator in which he stressed that he had evidence that Google and its technology practice religious discrimination.

Conversation by imitation

The company claims that its systems mimic conversational exchanges and can talk about different topics, but have no consciousness.

"Our team, including ethicists and technologists, have reviewed Blake's concerns based on our AI principles and I have advised him that

the evidence does not support his claims

," ​​Google spokesman Brian Gabriel was quoted as saying by the Newspaper.

Google maintains that hundreds of its researchers and engineers have talked to LaMDA, which is an internal tool, and reached a different conclusion from Lemoine's.

Most experts also believe that the industry is a long way from achieving machine sentience.
