Website of ChatGPT provider OpenAI
Photo: Richard Drew / AP
ChatGPT provider OpenAI wants to assemble a new research team and invest significant resources to ensure that artificial intelligence (AI) remains safe for humanity, the AI company announced on Wednesday.
"The vast power of 'superintelligence' could lead to the disempowerment of humanity or even human extinction," OpenAI co-founder Ilya Sutskever and OpenAI alignment lead Jan Leike wrote in a blog post. "Currently, we don't have a solution for steering or controlling a potentially superintelligent AI and preventing it from going rogue."
You can read the original blog here: Introducing Superalignment
The authors of the blog post predicted that superintelligent AI systems, smarter than humans, could become a reality before the end of this decade. Humanity will need better techniques than those currently available to control such AI.
OpenAI will therefore dedicate 20 percent of its computing power over the next four years to developing a solution to this problem. In addition, the company is assembling a new research team, the "Superalignment Team", for this purpose.
The potential dangers of artificial intelligence are currently being intensively discussed by researchers and the general public:
The EU has already agreed on a draft law to regulate AI, the "AI Act".
In the USA, following the EU's example, there is discussion about classifying AI applications into risk classes, which would determine the requirements imposed on companies.
UN Secretary-General António Guterres has voiced concern about the future of artificial intelligence. He proposes setting up a regulatory body jointly run by UN member states.
In March, a group of AI industry leaders and experts signed an open letter pointing out the potential risks AI poses to society and calling for a six-month pause in the development of systems more powerful than OpenAI's GPT-4.
In May, hundreds of experts warned about the risks of artificial intelligence: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the statement reads. Signatories include Sam Altman, CEO of OpenAI, Demis Hassabis, head of Google DeepMind, and the Turing Award-winning AI researchers Geoffrey Hinton and Yoshua Bengio.