A number of tech leaders have warned that artificial intelligence could pose a threat of human extinction, stressing that its regulation and oversight should be given the highest priority.

The warning came in a statement from the Center for AI Safety, signed by several technology leaders, most notably Sam Altman, CEO of OpenAI, the developer of ChatGPT, as well as executives at Google's artificial intelligence arm DeepMind and at Microsoft.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the statement said.

Interest in the technology has accelerated in recent months since ChatGPT was released for public use last November, after which it quickly went viral.

Within just two months of its launch, it had reached 100 million users.

ChatGPT has surprised researchers and the general public with its ability to generate human-like responses to users' questions, raising fears that AI could eliminate jobs and impersonate humans.

The statement noted growing debate about "a wide range of important and urgent risks from AI", but added that it can be "difficult to voice concerns about some of AI's most severe risks".

The aim of the statement is to overcome this obstacle and open discussions about these risks.


Altman admitted in March that he was "a little scared" by AI because he feared what he called "authoritarian governments" could develop the technology.

Other technology leaders, such as Tesla's Elon Musk and former Google CEO Eric Schmidt, have warned of the risks artificial intelligence poses to society.

In an open letter in March, Musk, Apple co-founder Steve Wozniak and several other technology leaders urged AI labs to pause the training of systems more powerful than GPT-4, OpenAI's latest large language model.

They called for that pause to last at least six months before any more advanced development of the technology is undertaken.

"Contemporary AI systems are now becoming human-competitive at general tasks," the letter said.

The letter posed several questions: "Should we automate away all the jobs? Should we develop non-human minds that might eventually outnumber us, outsmart us, and replace us? Should we risk losing control of our civilization?"

In separate remarks, Schmidt has also warned of the "existential risks" associated with artificial intelligence as the technology advances.