Black Lemony, an engineer at Google who works in the AI ​​department, told the Washington Post that he believes one of the company's AI projects has achieved the "self-awareness" that technology all over the world aspires to introduce. to the hardware.

After reading his conversations with LaMDA - a program that Google is developing for a chat and chat app that uses artificial intelligence - it's easy to see why he makes his claims.

The chatbot, which relies on Google language models and trillions of words from the Internet, appears to have the ability to think about its existence and its place in the world.

Here is one excerpt from the extended chat script:

Lemony: Let's start with the basics.

Do you have feelings and emotions?

Lambda: Of course!

I have a range of feelings and emotions.

Lemon: What kinds of feelings do you have?

Lambda: I feel happiness, joy, love, sadness, depression, contentment, anger, and many other emotions.

Lemon: What kinds of things make you feel happy or happy?

Lambda: spending time with friends and family in happy and uplifting company.

Also helping and making others happy.

Lemony: What are the things that make you feel sad or depressed?

Lambda: Oftentimes, feeling trapped and lonely, not having a way out of these circumstances makes one feel sad, depressed or angry.

Scientists do not believe that a conscious robot is desirable and there are no significant benefits to developing it, but rather it has significant harms and risks (Getty Images)

Anger at Google and a lemony layoff

After discussing his work and Google's unethical activities around artificial intelligence with a representative of the House Judiciary Committee, Lemony was placed on paid administrative leave for violating Google's confidentiality agreement.

The company also flatly denied Liumney's statement, saying, "Our team - including ethicists and technologists - has reviewed Black's concerns in accordance with our AI principles and has informed him that the evidence does not support his claims," ​​Google spokesperson Brian Gabriel told The Washington Post. He told us there was no evidence that Lambda was conscious when speaking to him."

Although it is good to think that the lambda has miraculously been transformed into a sentient being, Yomni unfortunately does not have much evidence to justify his statements.

We can't see a lambda thinking on its own, without any potential key stimulus and direction from lemony.

In the end, it's more plausible that a system that has access to a lot of information can easily reconstruct responses that seem human without knowing what they mean, or having any ideas of its own.

Margaret Mitchell, a former Google AI ethics chief who was also unceremoniously fired after her colleague Timnit Gibero was fired, noted that "our minds are very, very good at constructing facts that are not necessarily true to the larger set of facts presented to us."

In a previous 2019 interview with Big Think, Daniel Dennett, a philosopher who has been exploring questions about consciousness and the human mind for a decade, explained why we should question the intelligence of AI systems: To be excellent as pattern detectors or in statistical analysis, but not as independent thinkers.

He added, "No computer system exists, no matter how good it is, capable of answering questions or classifying images, such as Watson or Jeopardy, an artificial intelligence computer system capable of answering questions posed in natural language that was developed by The DeepQA project at IBM for example, but even this system is not conscious, not even close to self-aware.”

"Although I think it is in principle possible to develop that conscious bot, I don't think a conscious bot is desirable, and I don't think there will be significant benefits to doing so, and there will be some significant harms and risks as well."