According to the New York Times, on May 30, more than 350 industry executives, researchers and engineers in the field of artificial intelligence signed an open letter warning that AI could pose an "extinction risk" to humanity.

The letter consists of a single sentence: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

File photo: ChatGPT.

The letter, published by the nonprofit Center for AI Safety, included executives from three leading AI companies among its more than 350 signatories: Sam Altman, CEO of OpenAI; Demis Hassabis, CEO of Google DeepMind; and Dario Amodei, CEO of Anthropic.

The New York Times said that recent progress in large language models, the type of AI system that powers ChatGPT and other chatbots, has heightened these concerns. There are fears that AI could soon be used at scale to spread misinformation and propaganda, or could eliminate millions of white-collar jobs.

In May, Altman, Hassabis and Amodei met with U.S. President Joe Biden and Vice President Kamala Harris to discuss AI regulation. In Senate testimony after the meeting, Altman warned that the risks of advanced AI systems were serious enough to require government intervention, and called for regulation to address AI's potential harms.

On May 16, 2023, local time, OpenAI CEO Sam Altman attended a hearing held by the Senate Judiciary Committee's Subcommittee on Privacy, Technology, and the Law.

Previously, Altman and two other OpenAI executives had proposed several ways to responsibly manage powerful AI systems. They called on leading AI developers to collaborate, to conduct more technical research on large language models, and to form an international AI safety organization similar to the International Atomic Energy Agency.

Altman also said he supported rules requiring makers of large, cutting-edge AI models to obtain government-issued licenses.

"I think if something goes wrong with this technology, it could be very wrong." Altman told the Senate subcommittee. "We want to work with the government to prevent this from happening."

In March, more than 1,000 tech leaders, technologists and researchers, including Elon Musk, reportedly signed another open letter calling for a six-month moratorium on the development of the largest AI models, citing fears that "the race to develop and deploy more powerful digital minds is out of control."