"Alexa, do you love me?" The question, innocent and almost playful, is repeated to every Amazon device equipped with artificial intelligence. What will happen when virtual assistants stop responding with canned phrases and become part of our daily lives? When the "yes, but as a friend" gives way to an answer built on the entire history of interaction between user and device, will it be too late to ask whether we really want our machines to develop feelings?

"There is a very famous essay entitled Robots Should Be Slaves that raises exactly that. The fact that they have feelings could become a problem for us, because if they are conscious then we would have to behave morally towards them. From a practical point of view, is that what we want? Or do we want tools?" says Anders Sandberg, professor of Cognitive Robotics at Imperial College London, in his interview with EL MUNDO. "It is important that if, for example, an autonomous car is destroyed to save its occupants in an accident, we do not feel sorry for it, that it remains expendable, because if we endow it with feelings we will have a new member of the family to worry about."

In his book Hit Refresh, published at the end of 2017 by HarperCollins, Satya Nadella, CEO of Microsoft, explained that the efforts of technology companies like his in the coming years would be aimed at perfecting artificial intelligences, not only in their practical functions but also in their emotional dimension. "The challenge we face is not only to get them to have adequate intelligence, but also to possess human qualities: emotions, ethical codes and empathy," explained the executive, who estimated that by 2021 the sector would move more than 14 billion euros a year.

The piece 'What a Loving and Beautiful World', by teamLab. BARBICAN

Right now, according to a recent study by the UK agency Code Computer Love, what Britons most ask Alexa to do is play music, give them the news and help them boil an egg. What the participants said they would most like it to do, however, is help them learn languages, tell them jokes, teach them how to be funnier and more attractive, and arrange dates for them.

The exhibition AI: More Than Human, currently on show at the Barbican gallery in London, raises, through the work of several artists, some of the ethical questions surrounding artificial intelligence. One of its pieces, by Lauren McCarthy, an associate professor at UCLA, casts the artist herself as an improved version of Amazon's Alexa, to explore whether a more human consciousness would benefit personal assistants.

She takes the place of LAUREN and, connected to the homes of several people who volunteered for the project, helps them 24 hours a day with their daily tasks. She turns on the lights, schedules calls and does everything virtual assistants currently do, but she also makes decisions and anticipates her users' needs and preferences by observing their patterns, as a friend or relative would. "LAUREN has recommended that I cut my hair every three weeks and I think it has done me good; it will raise my self-esteem," acknowledges one of the participants while shaving in his bathroom. Another is not so sure: "I don't know if I like the feeling. It is one thing to feel supported and another to feel you are not in control."

A girl plays with a dog-shaped robot in the exhibition.

"It's one thing to carry a phone with you all the time, but you can leave it in another room. Having your whole house taken over by these devices... has much bigger implications," McCarthy says of her work. "Sharing your personal space with another being requires interacting with it properly. If you have a dog or a cat, for example, you learn a little about its habits and it also learns about yours, and most of the time you interact correctly with each other. We negotiate, in some way, coexistence in that shared space. When it comes to living with artificial intelligence this will be trickier, because we might not understand each other," says Sandberg, whose work served as inspiration for the film Ex Machina.

Within the field of robotics there is a term known as the uncanny valley, which holds that when the human form is reproduced anthropomorphically in too faithful a way, it causes rejection and even repulsion in those who see it. It is what happens, for example, with overly realistic cartoon characters or with the physical expressions of robots that imitate us.

However, as the Barbican exhibition portrays, there is a whole generation raised on the idea of conscious, feeling machines through series such as Transformers or characters like Doraemon, the cosmic cat, so that discomfort may be shrinking. Moreover, the development of artificial intelligences, which "are still light years away from reproducing the human brain," according to Sandberg, would satisfy one of our species' primal needs: to create life. "If you can create something, it is because you can explain it, and perhaps that is why this issue haunts us. It is possible that we feel a little alone as a species, because all the minds around us are human and we need a second opinion," he notes.
