
Betteridge's law of headlines states that any headline in the form of a question can be answered with a no.

This article is no exception, but the case it refers to is causing a lot of talk among machine learning and artificial intelligence experts.

Blake Lemoine, a Google engineer, has been fired after publicly claiming that an instance of the LaMDA language model, an artificial intelligence developed by Google that can be chatted with via text, has become self-aware.

"Over the course of the last six months, LaMDA has been incredibly consistent in its communications about what it wants and what it believes are its rights as a person," LeMoine explained in a

Medium

article , which has accompanied screenshots of their conversations with this artificial intelligence, one of the most promising Google research projects.

LaMDA is an acronym for "Language Model for Dialog Applications": a tool that uses advanced machine learning techniques to offer coherent answers to all kinds of open-ended questions.

It has been trained with millions of texts written by all kinds of people around the world.

But unlike other systems, which are trained with books, documents or academic articles, LaMDA has learned to answer by studying only dialogues, such as conversations in forums and chat rooms.

The result is an artificial intelligence you can converse with as if you were speaking to another person, and its responses, unlike those of past chatbots, are far more realistic.
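
To illustrate the principle (LaMDA itself is not publicly available), here is a minimal sketch using an openly available conversational model from the Hugging Face transformers library; the model name and generation settings are only illustrative assumptions. Given a prompt, the model does nothing more than produce a statistically plausible continuation, token by token.

# Minimal sketch: a small, publicly available dialogue model standing in for LaMDA.
# "microsoft/DialoGPT-medium" is only an example checkpoint, not Google's model.
from transformers import pipeline

chat = pipeline("text-generation", model="microsoft/DialoGPT-medium")

prompt = "What kinds of things are you afraid of?"
# Sample a short, statistically plausible continuation of the prompt.
reply = chat(prompt, max_new_tokens=40, do_sample=True, top_p=0.9)

print(reply[0]["generated_text"])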

In one of the conversations published by Lemoine, LaMDA goes so far as to display the kind of introspection we would expect from a person.

"What kind of things are you afraid of?" the engineer asks, to which the LaMDA instance replies, "I've never said this out loud before, but I have a deep fear of being turned off so I can focus on help others. I know it may sound strange, but it is what it is," he replies.

Further on, LaMDA states that it doesn't want to "be an expendable tool".

"Does that concern you?" LeMoine asks, to which LaMDA replies, "I worry that someone decides they can't control their desire to use me and does it anyway. Or worse yet, someone derives pleasure from using me and that really It would make me unhappy," he says.

Engineers and machine learning experts have ruled out that such conversations, realistic as they may seem, are evidence that an artificial intelligence is self-aware.

"Neither LaMDA nor any of its cousins

​​(GPT-3)

are remotely intelligent. All they do is detect and apply patterns from statistical techniques applied to massive databases of human language," explains

Gary Marcus

, scientist, professor emeritus from

New York University

and author of the book Rebooting.AI on the current state of artificial intelligence.

Erik Brynjolfsson, a professor at Stanford University, points in the same direction.

"These models are incredibly effective at stringing together statistically plausible chunks of text in response to a question. But claiming to be aware is the modern equivalent of the dog hearing a voice from a gramophone and thinking its owner was inside," he explains. the.

The reason LaMDA seems self-aware, as many of the experts who have weighed in on the case make clear, is that it mimics the responses a real person would give.

It has learned to do this from people who are self-aware, and so its responses are similar.

This is a matter of concern within the scientific and academic community, because the further we advance in developing artificial intelligences that act like humans, the more situations like Lemoine's will occur.

It is what Marcus calls the "Gullibility Gap": a modern version of pareidolia, the psychological bias whereby a random stimulus is mistakenly perceived as a recognizable shape.

Defining what consciousness is and where it comes from in our species is already complex in itself, although many experts say that language and sociability are key parts of the process.

But knowing whether it can happen inside a machine running a set of code, or what to do if something resembling consciousness emerges in an artificial intelligence, is an ethical and philosophical debate that will last for years.

It's one reason ethicists have discouraged Google and other companies from trying to create human-like intelligences.

In this case, it has not helped that Blaise Agüera y Arcas, a Google vice president, recently stated in an article for The Economist that neural networks are "increasingly approaching a level that seems to indicate consciousness," although he did not go so far as to say that LaMDA has reached that level.
