Excerpt from the popular video game StarCraft II, which the US researchers used for their study

Photo: Blizzard Entertainment / AP

Scientists at the US Army's central research laboratory have been studying how soldiers could take advantage of recent advances in artificial intelligence on the battlefield.

The researchers at the United States Army Research Laboratory used, among other things, the GPT-4 language model from OpenAI.

The model forms the basis of the popular AI chatbot ChatGPT, but can also be used for other applications.

In a scientific paper, researchers Vinicius Goecks and Nicholas Waytowich describe how they created a virtual assistant that could help with decision-making on the battlefield.

First, they taught the program to act as an assistant to a military commander.

They then fed the artificial intelligence, among other things, the military objectives of a fictional scenario, friendly military doctrine, and information about the terrain and about friendly and enemy forces.
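The paper does not include the assistant's code, but conceptually this amounts to prompting GPT-4 with a role instruction plus the scenario material. The sketch below shows how such a call might look with OpenAI's Python client; the prompt wording, the scenario briefing and the model name are illustrative assumptions, not details taken from the study.

```python
# Illustrative sketch only: the researchers' actual prompts and code are not
# published, so the scenario text and instructions below are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical briefing mirroring the kinds of inputs described in the article:
# mission objectives, doctrine excerpts, terrain and force information.
scenario_briefing = """
Mission objective: destroy enemy forces east of the river crossing.
Doctrine notes: (excerpts from friendly planning doctrine would go here)
Terrain: river running north-south, two usable bridges.
Friendly forces: two armored companies, one mechanized infantry company.
Enemy forces: one mechanized infantry battalion dug in near the eastern bridge.
"""

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        # Role instruction: the model acts as an assistant to a commander,
        # as described in the article.
        {
            "role": "system",
            "content": (
                "You are an assistant to a military commander. Propose candidate "
                "courses of action for the scenario provided by the user."
            ),
        },
        # Scenario context: objectives, doctrine, terrain and force data.
        {"role": "user", "content": scenario_briefing},
    ],
)

print(response.choices[0].message.content)
```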

The researchers did not carry out the experiment in a real military operation or maneuver, but in a video game.

To do this, they used an adapted map from the well-known video game StarCraft II and played through the scenario “Operation TigerClaw,” which involves destroying enemy forces.

They named their virtual assistant COA-GPT.

The abbreviation stands for Course of Action, a form of operational planning firmly established in the US military.

Too much trust in AI

In their study, which has not yet been peer-reviewed, the researchers state that COA-GPT outperforms humans at rapidly drafting strategic operational plans.

However, the artificial intelligence could not act freely.

The scientists designed the program so that a human has to review and approve the operational plans it produces.
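The study does not spell out this approval mechanism in code; the following sketch merely illustrates the idea of such a human-in-the-loop gate. The function names and the interactive prompt are hypothetical, and generate_course_of_action stands in for a language-model call like the one sketched above.

```python
# Minimal sketch of a human-in-the-loop approval gate (hypothetical helpers).
def request_human_approval(plan: str) -> bool:
    """Show a generated plan to a human operator and ask for approval."""
    print("Proposed course of action:\n")
    print(plan)
    answer = input("\nApprove this plan? [y/N] ").strip().lower()
    return answer == "y"


def plan_with_oversight(generate_course_of_action, max_attempts: int = 3) -> str | None:
    """Generate plans until a human approves one or the attempts run out."""
    for _ in range(max_attempts):
        plan = generate_course_of_action()
        if request_human_approval(plan):
            return plan  # only an approved plan is ever acted upon
    return None  # no plan was approved
```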

The researchers also found that GPT-4 performed better than other AI language models in their scenario.

In January, OpenAI had already permitted certain military uses of its technology, as the tech magazine The Intercept reported.

However, the technology may not be used to develop weapons.

Speaking to the science magazine New Scientist, which first reported on the study, AI expert Carol Smith warned that people tend to place too much trust in the recommendations of such AI systems.

"I wouldn't recommend using a language model or generative AI systems in a situation where the stakes are high," said Smith, a researcher at the Software Engineering Institute at Carnegie Mellon University in Pennsylvania.

hp