UK standards body publishes first international standard for safe AI management

Britain's standards body has published a first-of-its-kind international standard on how to safely manage artificial intelligence, dpa reports.

The new UK guidance sets out how to establish, implement, maintain and continually improve an AI management system, with a focus on safeguards.

The British Standards Institution (BSI) guidance explains how companies can responsibly develop and deploy AI tools, both internally and for external use.

This comes amid ongoing debate over how to regulate the fast-moving technology, which has become increasingly prominent over the past year thanks to the public release of generative AI tools such as ChatGPT.

The UK hosted the first global summit on AI safety in November, where world leaders and major tech companies met to discuss the safe and responsible development of AI, as well as the potential long-term risks the technology could pose.

Those risks include the use of artificial intelligence to create malware for cyberattacks, and even a potential existential threat to humanity should humans lose control of the technology.

Susan Taylor Martin, chief executive of the British Standards Institution (BSI), said of the new international standard: "AI is a transformative technology, and trust is critical for it to be a powerful force for good."

"The publication of the first international standard for an AI management system is an important step in enabling organizations to manage technology responsibly."
