Recent high-profile applications for generating text, images, audio and video make it possible for anyone to create content that looks reasonably convincing. OpenAI, the company behind the chatbot ChatGPT, notes in an analysis that the technology can drive down the cost of conducting influence operations, so that more actors can afford to abuse AI for disinformation purposes.

Spreading propaganda

There is no shortage of examples showing that they are right.

The American research company Graphika, which studies online disinformation, shows in a recent report how a pro-Chinese influence group uses the technology. The Financial Times recently reported on a similar spread of propaganda using synthetic video in Venezuela in support of the Maduro regime.

"In Venezuela's information desert, disinformation is thriving and now the technology exists to create convincing fake news videos," Adrián González, representative of the organization Cazadores de Fake News, which exposes disinformation, told the newspaper.

In several of the cases, the videos were created with a tool from the London-based company Synthesia. Its application makes it possible to quickly and easily create videos with avatars: pre-recorded actors who can be made to say whatever you want in over 120 languages.

Synthesia states that spreading false information violates the company's terms of use and that those who have abused the service have been suspended.

Creating disinformation quickly

Kristian Rönn, who has a background in computer science and artificial intelligence, is a tech entrepreneur in sustainability measurement and uses AI in his daily work. But he is concerned about how the technology can also be used for other purposes.

"Anyone can ask an AI to write a million fake news articles, more than any journalist will ever be able to fact-check. Disinformation on a scale that we don't know how to deal with.
