Elon Musk and a group of artificial intelligence experts and industry executives have called, in an open letter, for a six-month pause on the training of AI systems more powerful than OpenAI's newly launched GPT-4, citing potential risks to society and humanity, according to a Reuters report.

The letter, issued by the nonprofit Future of Life Institute, was signed by more than a thousand people, including Tesla and Twitter chief Elon Musk, Stability AI CEO Emad Mostaque, and researchers at DeepMind, the AI subsidiary of Google's parent Alphabet, as well as prominent figures in the field such as Yoshua Bengio, a Canadian computer scientist of Moroccan origin who is among the most prominent contemporary computer scientists, and Stuart Russell, a British computer scientist, engineer and university professor.

The signatories called for a halt to the development of advanced AI until shared safety protocols for such systems are developed, implemented and audited by independent experts.

"Robust AI systems should not be developed before we are confident that their effects will be positive and that their risks can be controlled," the letter said.

ChatGPT has spurred competitors to accelerate development of similar large language models (Getty Images)

The letter also detailed potential risks to society and civilization from human-competitive AI systems, including economic and political disruption, and called on developers to work with policymakers on governance and regulatory oversight.

The letter comes after Europol, the European Union's police agency, on Monday joined the chorus of voices raising ethical and legal concerns about advanced artificial intelligence such as ChatGPT, warning that such systems could be misused for phishing attempts, disinformation and cybercrime.

Musk, whose automaker Tesla uses artificial intelligence in its Autopilot driver-assistance system, has long been vocal about his concerns over AI.

Since its launch last year, Microsoft-backed OpenAI's ChatGPT has spurred competitors to accelerate the development of similar large language models and prompted companies to integrate generative AI models into their products.

A Future of Life spokesperson told Reuters that OpenAI CEO Sam Altman had not signed the letter.

Gary Marcus, professor emeritus at New York University who signed the letter, said it was "not perfect, but it holds a valid idea: We need to slow down the pace of artificial intelligence to better understand the implications."

"They can cause serious harm, big players have become more secretive about what they are doing, making it difficult for society to defend against the damage that may occur."