Sundar Pichai at an appearance at the Munich Security Conference: Google wants to regain trust

Photo: Tobias Hase / dpa

In an internal memo, Google boss Sundar Pichai admitted errors in the latest update of its own AI tool Gemini and announced far-reaching changes in order to regain the trust of users.

He finds clear words: »I know that some of its responses have offended our users and shown bias. To be clear, that's completely unacceptable, and we got it wrong,« the circular begins, as quoted by the US outlet »Semafor«.

Criticism arose last week after a Gemini update: When asked for historical motifs, the image generator produced, for example, Asian-looking women in what appeared to be Wehrmacht uniforms.

Images of a Black Viking man and a Black Viking woman, each with dreadlocks, were widely shared on social media.

The software's text component also drew ridicule because it refused to answer whether Adolf Hitler or Elon Musk had had a worse influence on society.

In response, Google has suspended the tool's ability to generate images of people until further notice.

Google wants to identify weaknesses before release

In his circular to the workforce, Sundar Pichai assured employees that the Gemini team has been working around the clock to resolve the issues.

There are also to be structural changes: The review process that products undergo before release is to be revamped. This includes expanded »red teaming«, in which targeted attacks are simulated to uncover weak points.

At the same time, the Google chief assures that the company is well positioned for the coming wave of AI applications.

Pichai does not comment on the exact causes of the problem in the letter.

Right-wing circles in particular suspect that Google prescribed Gemini an excess of diversity.

Google has had clear shortcomings in this area before: A few years ago, the automatic image tagging in Google Photos labeled people with dark skin as apes.