Ángel Jiménez de Luis | USA

Updated Wednesday, February 28, 2024 - 23:13

Google's new artificial intelligence, Gemini, is having a bumpy takeoff.

Just two weeks after it was officially presented, Google has had to pause some of its functions after offensive and unintentionally comical results it produced when asked to create images from a text description were published on various social networks.

In some cases the tool has refused to depict white people in historical contexts that called for it, or has inserted Asian or Black people when asked to show images of Vikings, Nazis or medieval English knights.

This is not a problem exclusive to images.

The text-generation engine has also been a focus of this criticism for the alleged biases and prejudices in the responses it offers.

In a statement, Sundar Pichai, CEO of Google, has finally acknowledged the problem.

"I know that some of his responses have offended our users and shown prejudice; to be clear,

that is completely unacceptable and we have made a mistake,"

explains Pichai.

Although there is no official explanation, the suspicion is that, in an attempt to always present images showing people of various races, Google miscalculated the effect in specific contexts where those results can seem absurd.

In the posts that have circulated online, the tool, for example, depicted the founding fathers of the United States as Black, Asian or Indian people.

When asked for an image of the Pope, Gemini could also show a woman in papal attire.

"No artificial intelligence is perfect, especially at this stage of the industry's development, but we know the bar is high for us and we will continue working on the problem for as long as necessary," Pichai adds in his statement.

The problem that Google was originally trying to solve, in any case, is real.

Today's artificial intelligence engines are trained on materials that can introduce biases and prejudices, and they risk perpetuating them if safeguards are not put in place to eliminate them.

It is an issue that concerns many of the researchers, developers and academics involved in creating these new tools.

For Google, however, the episode is a blow to an already battered image.

Although the company developed in-house some of the technologies that made modern artificial intelligence possible, the prevailing view in the industry is that Google is not innovating in this space at the right pace and that its rivals, especially OpenAI, have managed to move much faster and create more advanced products.