Human-robot love, soon a reality?


© Studiostoks / Shutterstock (via The Conversation)

  • We think we make all of our decisions ourselves, but AIs are making more and more of them for us, according to a study published by our partner The Conversation.

  • Today's AIs obey a "specialized intelligence" that allows them to act on specific tasks or extremely precise objectives.

  • The analysis of this phenomenon was carried out by a researcher in the economics of innovation, a researcher in twentieth-century French and comparative literature, and a researcher in computational neuroscience (all three at the University of Bordeaux).

The current fantasies associated with artificial intelligence (AI) have their origins in science fiction (SF).

Long before anyone spoke of AI, and long before Alan Turing's work in the 1950s, robots (from the Czech "robota", meaning "drudgery") had already invaded SF literature.

In 1883, the agricultural robots, or "atmophytes", of Didier de Chousy in Ignis, and later the androids of Karel Čapek in his 1920 play R.U.R., offered a first representation of the close link between robots and autonomy of thought.

In R.U.R., the robots end up revolting against humanity.

Dominant from the start, this destructive, apocalyptic representation is not the only one that SF would convey.

Between Frankenstein complex and "machinic empathy"

Blending robots and AI, SF has built an ambivalent relationship with machines, based both on the so-called "Frankenstein complex" and on machinic empathy.

For the writer Isaac Asimov, the Frankenstein complex, that of the creature's revolt against humanity, finds its source in Mary Shelley's Frankenstein (written in 1816), where the artificial creature kills its creator.

A century later, one of the first movie robots, Q the Automaton (The Master Mystery, 1918), also embodies this idea.

"Q" The Automaton at the Houdini Museum (New-York) / Fantasma Magic

In contrast, machinic empathy is the idea that a machine has an emotional connection with humanity and would do anything to protect us (A.I. Artificial Intelligence and its robot child, Wall-E, etc.).

The first robot of this type to appear in SF is the character of Adam Link (in Amazing Stories, 1939-1942), created by the brothers Earl and Otto Binder.

Isaac Asimov drew inspiration from them to propose robots benevolent towards humanity, in short stories devoted to metal heroes governed by the Three Laws of Robotics:

1 - a robot may not injure a human being or, through inaction, allow a human being to come to harm;

2 - a robot must obey the orders given to it by a human being, except where such orders would conflict with the first law;

3 - a robot must protect its own existence as long as such protection does not conflict with the first or second law.

He went even further in "The Bicentennial Man" (Stellar Science Fiction, 1976): here the robot wants to become human in order to embrace our mortal condition and abolish its immortality as an indestructible machine.

Detail from the film Forbidden Planet (F. McLeod Wilcox, 1956)

The fictional representation of AI also evolves according to the context of scientific advances.

Today's SF no longer reproduces, for example, the image of a giant, centralized, omnipotent computer, but rather that of a dematerialized AI, present in small units, everywhere and nowhere at once, as in Her by Spike Jonze (2013) or Les Machines fantômes by Olivier Paquet (2019).

AI is now represented as an intangible entity that invades the world.

Here fiction meets reality, because AIs are now everywhere (personal assistants, cars, phones, etc.).

And yet the imagination often outstrips reality, because it rests on an unproven postulate: that of an AI having surpassed humans.

AI, one intelligence among others

Intelligence can be defined as the ability to use past experience to adapt to a new situation.

Take the example of Alfred.

If Alfred makes the same mistake x times when facing the same situation, we perceive a problem.

If, on the other hand, Alfred is able to adapt quickly to a change in his environment and to carry out a task using resources that are not immediately available but draw on what he has learned elsewhere, then Alfred shows intelligence.

AIs are one kind of intelligence among many, and like all intelligences, they make mistakes in finding the right solution.

Self-correction and continuous improvement allow us to evolve while learning from our mistakes.

The same goes for AI, with machine learning made possible since the 2010s by access to massive amounts of data. More specifically, the machine looks for links among the collected data in order to categorize it.

It then displays "intelligence", like Alfred.

But it is not independent of human action: it needs humans in order to access data and, above all, to react to "a new problem".
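A minimal sketch can make concrete what "looking for links between the collected data to categorize it" means in practice. The data, labels, and classifier below are entirely invented for illustration: a nearest-centroid rule that averages the examples of each category, then assigns a new point to the closest average.

```python
# Toy sketch (hypothetical data and categories): the machine "learns"
# by summarizing collected examples, then uses those summaries to
# categorize new inputs it has never seen.

def train(examples):
    """Compute one centroid (the average point) per category."""
    centroids = {}
    for label, points in examples.items():
        n = len(points)
        centroids[label] = tuple(sum(p[i] for p in points) / n
                                 for i in range(len(points[0])))
    return centroids

def classify(centroids, point):
    """Assign the category whose centroid is closest to the point."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(centroids[label], point))

# Two made-up categories of 2D measurements.
data = {
    "small": [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9)],
    "large": [(5.0, 5.5), (4.8, 5.1), (5.2, 4.9)],
}
centroids = train(data)
print(classify(centroids, (1.0, 1.0)))  # prints "small"
print(classify(centroids, (5.0, 5.0)))  # prints "large"
```

The "intelligence" here is entirely confined to the links present in the data it was given: faced with a genuinely new kind of problem, the machine has nothing to fall back on without human intervention.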

"Artificial intelligence" dossier

Take the case of an autonomous car.

To adapt to the road, such a vehicle "reads" the landscape, road signs, and so on.

Yet a tiny detail is enough to fool the AI: a small sticker placed on a road sign, for example.

Our AIs are not yet reliable enough to avoid this type of decoy, hence the need for a human driver to handle such eventualities.
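The sticker decoy can be illustrated with a deliberately simplistic toy (the "classifier", its red-pixel rule, and the tiny images are all invented, and bear no relation to real sign-recognition systems): a decision rule tuned to one narrow cue can be flipped by a change no human would be fooled by.

```python
# Toy illustration (entirely made-up model): deciding "stop" vs
# "speed_limit" from the fraction of red pixels in a 3x3 "image".
# A small "sticker" covering two pixels flips the decision, even
# though a human would still recognize the sign.

def classify_sign(pixels):
    """pixels: 3x3 grid of color names. Mostly red -> 'stop'."""
    flat = [color for row in pixels for color in row]
    red_ratio = flat.count("red") / len(flat)
    return "stop" if red_ratio > 0.7 else "speed_limit"

stop_sign = [["red", "red", "red"],
             ["red", "white", "red"],
             ["red", "red", "red"]]
print(classify_sign(stop_sign))  # prints "stop"

# Place a small white "sticker" over two pixels.
stickered = [row[:] for row in stop_sign]
stickered[0][0] = "white"
stickered[0][1] = "white"
print(classify_sign(stickered))  # prints "speed_limit": the decoy worked
```

Real systems are vastly more sophisticated than this threshold rule, but the underlying fragility is the same: the decision hinges on statistical cues in the input, not on understanding what a stop sign is.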

General intelligence and specialized intelligence

The belief in fictional representations of AI also stems from the fact that the general public does not distinguish between two kinds of intelligence: so-called "general" intelligence and so-called "specialized" intelligence.

We speak of general intelligence to designate an almost infinite and very rapid capacity for learning and adaptation.

It allows decisions to be made while placing oneself in a moral context.

Human, or animal, intelligence is general intelligence.

Specialized intelligence refers to the ability to act on specific tasks or extremely precise objectives.

This is where we are today when it comes to AI.

AIs are systems that are trained to perform specific, increasingly complicated tasks.

These systems seem intelligent to us.

Some algorithms can indeed become experts in a very specific field (facial recognition, chess, etc.), but that is all they know how to do.

Your GPS, for example, will never wake up one day able to understand images, and a facial recognition system will never be able to plan your route... at least not without having been explicitly programmed to do so.

AI today lacks the adaptability required to make autonomous decisions.

What, then, should we make of the situation imagined by A. Proyas in his 2004 film I, Robot, where the robot prefers to save the policeman rather than the little girl because he had a better chance of survival?

Detail from the film I, Robot (A. Proyas, 2004)

Can AI become smarter than humans?

AIs smarter than ordinary humans: is that even possible?

To surpass humans on an intellectual level implies that AI would be able to make decisions for us.

Unsurprisingly, the theme is a source of inspiration for science fiction works, which address an essential question that science has not yet answered: what goals do machines pursue?

The theme of annihilating humanity to create a machinic civilization is found in The Matrix (where the machines enslave humans, without their knowledge, to use their bodies' heat and electrical activity as an energy source).

Wall-E offers a more empathetic reading of AI decision-making.

As early as the 1980s, Tron delivered this vision of creating a perfect world, free of human error: a sort of first cyberpunk utopia for machines and programs, but a hell for humans.

Detail from the film Wall-E (A. Stanton, 2008)

The opinions of experts on the issue still converge on one point: "it is not for today!"

And for tomorrow?

In the current state of knowledge, nothing is less certain.

This lack of adaptability means that an AI can be "superior" to us only in specialized areas (as in the aforementioned examples of chess and the game of Go).

We must also debunk another myth: the one predicting the possibility of a great replacement of humans by a super-AI.

This event would occur after what is called the Singularity: a unique moment of autonomous and accelerated evolution in which AI would surpass humans in intelligence and take its own destiny, as well as ours, into its hands.

Some go so far as to say that it could lead to the enslavement or, worse, the extinction of humanity!

The problem with the Singularity is that it runs up against the same limit as general intelligence.

Yet this "thesis" spills beyond the realm of the imagination to feed false beliefs anchored in reality.

Should we be afraid of AI?

These myths mask the real risk posed by an AI that is already present everywhere in our daily lives via smartphones and other "smart" objects.

They assist us in our choices by making suggestions based on our preferences (by accessing the data we store, sometimes without knowing it).

We think we have some decision-making power, but in the end, it's the machine that makes the decisions.

In this way human beings are being gradually, and therefore voluntarily, stripped of their critical faculties.

Some see it as a form of manipulation, whether to generate more profit for the market, or to take control over our lives.

AI is already being used not only to predict choices and behaviors but also to influence them.

"Whoever becomes a leader in this area will be the master of the world," Vladimir Poutine tells us.

In the end, the question that we should, democratically, ask ourselves is the following: why do we want AI?

Because we know how to do it?

Because we don't know how to do it?

Because others are doing it?

AI is not a natural phenomenon that would impose itself on us.

AIs are computer tools like any other and therefore should only be designed in response to explicit needs and provide all the elements necessary for their understanding and use.

Computer science and AI are means, not ends.

And these are not necessarily, everywhere and always, the best means.


This analysis was written by Marie Coris, teacher-researcher in the economics of innovation; Natacha Vas-Deyres, researcher in twentieth-century French and comparative literature, specialist in anticipation and in literary and cinematographic science fiction; and Nicolas P. Rougier, researcher in computational neuroscience (all three at the University of Bordeaux) / with the participation of Karen Sobriel, student in the Master 1 in science mediation, as part of her professional internship at the SHS research department Changes of the University of Bordeaux.

The original article was published on The Conversation website.
