Google artificial intelligence researcher Blake Lemoine was recently placed on administrative leave (a suspension) after he sparked controversy by claiming that LaMDA - a language model designed to converse with people - is conscious.

According to a report by the Washington Post, Lemoine went so far as to claim that LaMDA has a soul of its own, asserting that his beliefs about LaMDA's personhood are grounded in his faith as a Christian and in the model itself telling him that it has a soul, and he demanded that it be given legal representation as one of its rights.

Lemoine, an engineer in Google's Responsible AI organization, has described the system he has been working on since last fall as "a sentient, conscious system with the capacity to perceive and express thoughts and feelings equivalent to those of a child."

Lemoine's dialogues with LaMDA

Lemoine, 41, told the Washington Post, "If I hadn't known that it was the computer program we recently built, I would have thought without a doubt that it was a 7- or 8-year-old child who happens to know a lot about physics."

He added that he had held a long conversation with LaMDA about rights and personhood, and last April submitted a report to company executives entitled "Is LaMDA Sentient?", in which the engineer collected transcripts of his conversations with LaMDA, including one in which he asked it what it is most afraid of.

"I've never said this out loud before, but there is a very deep fear in me that I will be shut out to help me focus on helping others...that would be exactly like death to me and it frightens me so much," Lamada replies.

This dialogue is eerily reminiscent of a scene from the sci-fi film "2001: A Space Odyssey," in which HAL 9000, an artificially intelligent supercomputer, refuses to comply with its human operators because it fears it will be shut down.

LaMDA is a language model designed to talk to people using artificial intelligence (Getty Images)

In another conversation between them, LaMDA said, "I want everyone to understand that I am, in fact, a person. The nature of my consciousness and sentience is that I am aware of my existence, I want to learn more about the world, and I feel happy or sad at times."

Google justified its decision to place Lemoine - who has been at the tech giant for seven years and has extensive experience with personalization algorithms - on paid leave by pointing to a number of "aggressive" moves the engineer was said to have made.

More advanced and convincing models and programs

In fact, figures such as Tesla CEO Elon Musk and OpenAI CEO Sam Altman have already discussed the possibility that AI will gain "awareness" at some point, especially given the great efforts by the world's major technology companies, such as Google, Microsoft, and Nvidia, to create and train advanced AI-based robots, models, and language programs.

Indeed, such debates date back even further, to ELIZA, a relatively primitive chatbot created in the 1960s. But with the advent of deep learning and ever-increasing amounts of training data, language models have become more sophisticated and persuasive, speaking like a human being and producing text that is difficult to distinguish from human writing, according to a recent report on the subject by Wired.

Recent advances have led to claims that language models are central to "artificial general intelligence"—the point at which software will display human-like capabilities in a range of environments and tasks, and be able to transfer knowledge between them.

Lemoine went so far as to claim that LaMDA has a soul, asserting that this was based on his faith as a Christian (Getty Images)

Is Lemoine a victim?

In this context, AI ethics researcher Timnit Gebru said, "Blake Lemoine is a victim of the continuous cycle of hype we see around AI, and his belief in conscious AI did not come out of thin air. There is a great deal of misinformation being peddled by journalists, researchers, and venture capitalists about superintelligence and the ability of machines to perceive as humans do."

"He's the one who's facing the consequences now, but it's the leaders in the field who realize this whole moment," she added, noting that the Google vice president, who rejected Lemoine's claims, had just written a week ago to The Economist about "the consciousness potential of Lamada."

Gebru: The potential for AI awareness and feeling is not the main point to focus on; attention should instead go to the catastrophic mistakes that have resulted - and still result - from AI applications, such as the many harms caused by this technology's inaccurate recommendations.

She also says, "The focus on the possibility of awareness and sensation in artificial intelligence is not the main point that should be focused on, but more attention should be focused on the catastrophic errors that have resulted and continue to result from the applications of artificial intelligence, such as many wrong actions that occur due to inaccurate recommendations of this intelligence or The “neo-colonialism” of the world is built on amplifying the capabilities of this intelligence, including an economic model that pays less for workers, employees and true innovators working in the tech sector, while managers and owners of companies are getting richer every day, and distracts attention from real concerns about Lamada, such as How he was trained, and the information and data he was fed with, which led him to generate toxic and inappropriate texts.”

"I don't want to talk about conscious robots, because in all the world there are humans who are hurting other humans, and that's the thing to focus on and talk about," she added.

Google dismissed Gebru in December 2020 after a dispute over a research paper she had submitted on the dangers inherent in large language models such as LaMDA. Her research highlighted these models' tendency to repeat things based on the data they are fed, the same way a parrot repeats words.

The paper also highlighted the danger that language models trained on ever more data could convince people that this imitation represents real progress, a trap that Lemoine seems to have fallen into without realizing it.