【Today's Viewpoint】

◎ Liu Xia, reporter of this newspaper

Images of former U.S. President Donald Trump being wrestled to the ground by heavily armed New York riot police have flooded social media platforms such as Twitter. But these seemingly detailed images have nothing to do with reality: they were produced with artificial intelligence (AI)-driven image generation technology.

Experts warn that the images reveal a new reality: in the wake of a major news event, fake images and videos can flood social media and further obscure the facts, making it urgent to develop policies to regulate such technologies.

Composite images are hard to tell from real ones

According to the website of Fortune magazine, these "Trump arrested" images were generated by Eliot Higgins, founder of the Netherlands-based open-source investigative outlet Bellingcat.

When Higgins saw the news that Trump might be arrested, he decided to visualize it. To that end, he used the latest version of the AI image generation tool Midjourney to create pictures of Trump's arrest. He said the latest version is far more sophisticated than its predecessors and produces much more convincing visuals. He then shared the composites on Twitter: pictures of the former president surrounded by police officers, with the badges blurred. According to the website of the Washington Post, the thread Higgins posted was viewed nearly 5 million times in just two days.

AI experts say that while the technology for manipulating and generating fake images is not new, the pace of progress in the field, and the misuse of the technology, bear watching. Mounir Ibrahim of the digital content analysis firm Truepic noted that synthetic content is evolving rapidly, and the difference between real and fake content is becoming increasingly difficult to discern.

According to Fortune, a variety of AI image generation tools are now at users' fingertips, able to quickly generate large numbers of lifelike images from a simple text prompt. Midjourney's text-to-image model, for example, can now generate images that mimic the style of news agency photographs, so AI-generated images have the potential to "fish in troubled waters" and mislead the public in a chaotic news environment.

Jevin West, a professor at the University of Washington who studies the spread of misinformation, said: "It does add 'noise' during a crisis."

Experts emphasize that the capacity to mass-produce fake but seemingly credible images has improved tremendously and can easily be turned to deceptive ends. Visual information is often reshared quickly without key context. Indeed, one fake "Trump arrested" image post garnered more than 79,000 likes, "as if the photos were real."

Technical crackdown on "deepfakes"

Higgins believes that as synthetic images become harder to tell from real ones, the best way to combat visual misinformation is to raise public awareness through education, while social media companies can focus on developing new technologies that can distinguish AI-generated images and on integrating those technologies into their platforms.

Twitter, for example, has policies prohibiting users from sharing deceptive or manipulated media that could cause harm, such as tweets that could incite violence or widespread civil unrest, or threaten personal privacy. In February, Twitter rolled out its "Community Notes" feature, which lets users append notes beneath tweets to supply context in longer form. Some observers believe Community Notes could help make Twitter a more credible platform where more people speak constructively, reduce the share of fake news and misleading content, and thereby attract more advertising.

According to the Washington Post, major technology companies have tightened their policies against "deepfakes" since 2019. In 2020, Meta Platforms (then Facebook) banned users from posting highly manipulated videos, though it still allows modified videos intended as parody or satire.

However, some experts say the technology is growing more sophisticated and harder to monitor, and that none of these internet giants has invested significantly in detecting such content or in enforcing the relevant policies.

Policy regulation is imperative

The spread of the fake "Trump arrested" images across the internet also offers a case study in the current absence of corporate standards or government regulation addressing the use of AI to create and spread falsehoods.

Arthur Holland Michel, a researcher at the Carnegie Council for Ethics in International Affairs in New York, said he was concerned that the world was not ready for the coming deluge of disinformation.

Experts agree that Trump's fame makes these fake images relatively easy to spot, but fake images involving ordinary people can be far harder to identify, and the technology for generating them keeps improving. From a policy standpoint, it is urgent to regulate the application of deep-synthesis technology through legislation.