
OpenAI logo: "Slight increases"

Photo: Dado Ruvic / REUTERS

OpenAI, the company behind ChatGPT, has investigated whether its most advanced artificial intelligence to date can be misused to develop bioweapons. The result of the in-house study: the GPT-4 language model can make the development process somewhat easier, but the difference compared with someone who only has access to the Internet is not statistically significant.

The investigation was carried out as a comparative test. 100 experts and biology students were divided into two groups. In one group, participants were given access to a non-public version of GPT-4 in which certain security features and filters were disabled. The control group only had access to the Internet. Five metrics were used to compare whether access to GPT-4 is beneficial. In the metrics "accuracy" and "completeness," OpenAI found "slight increases" in the group with access to the AI model. However, the extent of these effects is not large enough to be considered statistically significant. The company also emphasizes that access to the relevant information alone is not enough to develop a bioweapon. Practical implementation, however, was not examined.

The focus was on whether GPT-4 could help recreate an already known biological threat; the development of previously unknown substances was not examined. Almost two years ago, however, the pharmaceutical company Collaborations Pharmaceuticals caused a stir when it had a specially programmed AI model digitally design new chemical warfare agents that were as toxic as possible. The system generated 40,000 such molecules in less than six hours.

"Online sources and databases have more dangerous content than we thought"

OpenAI interprets the results of its study to mean that access to the special version of GPT-4 "can increase the ability of experts to access information about biological threats." However, the company is "uncertain" how significant this observation is.

CEO Sam Altman's company describes the investigation as a "blueprint" for the development of an early warning system and a starting point for further research. The underlying idea is to develop "evaluation methods for AI-related security risks," because "as OpenAI and other developers build even more capable AI systems, the potential for both useful and harmful use will increase." The method developed for this study can be imagined as a "tripwire" that provides a timely warning should AI become significantly more helpful than mere Internet access in developing dangerous materials such as viruses.

In any case, it is already "relatively easy" to obtain such information, even without GPT-4: "Online sources and databases have more dangerous content than we thought," writes OpenAI. "Step-by-step methodologies and troubleshooting tips for developing biological threats are just a quick Internet search away." Nevertheless, bioterrorism is "still relatively rare." This shows how important access to laboratories and specialist knowledge in microbiology and virology is, in addition to pure information.

pbe