ChatGPT on a laptop and a smartphone: Microsoft and OpenAI internal investigations

Photo: Hannes P Albert / dpa

State-backed hackers from China, Russia, North Korea and Iran are apparently increasingly relying on artificial intelligence (AI) in their attacks. Internal investigations have revealed that these groups are using OpenAI's ChatGPT to refine their methods, Microsoft said on Wednesday. The software company is OpenAI's most important partner and investor.

The hackers mainly used the technology to automate their software development. Some of the groups also used it to translate technical documentation and search for publicly available information. Russian cybercriminals with suspected ties to the country's intelligence services, for example, researched "various satellite and radar technologies that could relate to military operations in Ukraine." Chinese hackers queried the AI about foreign intelligence services and individual persons.

The Iranian and North Korean hackers also had the AI write texts for phishing attacks. In such attacks, victims are lured to fake websites with deceptively genuine-looking emails and tricked into entering their login credentials there.

Current AI is only partially more useful than traditional tools

OpenAI said the findings confirm its assessment that current AI technology is only marginally more useful for developing cyberattacks than conventional tools. Nevertheless, the accounts of five hacking groups were closed. Microsoft also emphasized that it had not yet observed any novel AI-based attacks.

"Regardless of whether there is a violation of the law or of the terms of service, we simply do not want the actors we have identified to have access to this technology," said Tom Burt, vice president of customer security at Microsoft, in an interview with Reuters. However, he did not want to comment on the extent of these activities or the number of blocked user accounts.

China criticized the allegations as "baseless slander"; there were no reactions from Russia, North Korea, or Iran. Western experts have long warned about the misuse of AI by criminals, but so far there is little evidence of it.

pbe/Reuters/dpa