Photo: DrPixel / Getty Images

With a strikingly short and general statement, hundreds of experts warned on Tuesday about the risks of artificial intelligence: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," it reads.

Signatories include Sam Altman, CEO of OpenAI, Demis Hassabis, head of Google DeepMind, and Turing Award-winning AI researchers Geoffrey Hinton and Yoshua Bengio. Taiwan's Digital Minister Audrey Tang, Microsoft's Chief Technology Officer Kevin Scott, musician Grimes and numerous AI experts from research and industry are also on the list.

The message was published by the Center for AI Safety in San Francisco. The "New York Times" was the first to report on it. According to the newspaper, the statement was deliberately kept concise in order to unite experts who otherwise hold differing views on the concrete danger posed by AI and on appropriate countermeasures.

Signed by those who develop AI themselves

At the end of March, prominent figures such as Elon Musk, along with AI specialists, had already published an open letter calling for a six-month pause in the training of new AI systems more powerful than OpenAI's current model, GPT-4. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter stated. At the beginning of May, AI pioneer Geoffrey Hinton followed up with his own warning after leaving his longtime employer Google. Other experts who have voiced concerns about uncontrollable AI in guest articles or interviews in recent weeks and months have signed both the open letter from March and the new statement from Tuesday.

The warning now published by the Center for AI Safety is particularly noteworthy because many of the signatories are employed in senior positions at precisely those companies that are currently working at full speed on increasingly powerful AI models and applications.

Sam Altman, for example, said in an interview with SPIEGEL last week that he was "very concerned that biological warfare agents could be developed with the help of AI systems." But he does not want to draw red lines for development on his own: "It is crucial that we determine the limits for this technology in a democratic process and retain control as humans. Incidentally, we as a company also need clarity and should therefore be regulated."

However, there is also dissent in the research community. Meta's chief AI scientist Yann LeCun, for example, who received the Turing Award together with Hinton and Bengio, has so far declined to sign any of the appeals. He has dismissed some of the warnings as "AI doomism."