Information or disinformation: is identifying AI-generated images mission impossible?

A photo of a tweet shows an AI-generated image of Donald Trump being arrested by police officers. AP - J. David Ake

Text by: Pierre Fesnien


With the rise of artificial intelligence, images generated by AI are becoming more and more common on the internet. Widely used by fake news propagators, their disturbing realism sometimes makes them very difficult to identify. But there are clues...



Pope Francis in a white puffer jacket, Donald Trump arrested by police officers or, more recently, an old man with a bloodied face detained during a demonstration against the pension reform... The development of artificial intelligence is now a boon for purveyors of false information, and for simple pranksters, who use it to produce ultra-realistic images that are nonetheless invented from scratch.

Among these clues, we can note the absence of distinctive signs of the French police (colored bands, insignia) or the strange appearance of the visor visible at the top left of image 3/... pic.twitter.com/oWfWjipcUt

— AFP Factual 🔎 (@AfpFactuel) March 30, 2023

Software such as Midjourney, DALL-E or Stable Diffusion can thus generate an infinite number of images from a huge database constantly fed by user requests. These images, quite realistic at first glance, sow confusion, especially when they relate to current events, but closer analysis can sometimes identify them.

Logo and reverse search

Creating these images could not be simpler. In software like Midjourney, all you have to do is type a text prompt to have the artificial intelligence generate a new image, pixel by pixel, drawing on millions of existing images. The result can be stunningly realistic, but some imperfections can remain and tip you off.

The first element that can indicate that a photo has been generated by artificial intelligence is the signature found in the lower right corner of the image. For DALL-E, for example, it is a multicolored rectangle, but this marker can easily be removed by malicious actors who crop the image. Another approach is to perform a reverse search in a search engine by dragging the image in question into the search bar, in order to trace its past occurrences and its source.
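As a rough illustration, the reverse-search step described above can be scripted. The sketch below builds lookup links for an image that is already hosted online; the URL schemes are assumptions based on the public endpoints of Google Lens, TinEye and Yandex, and may change over time.

```python
# Sketch: build reverse image search links for an already-hosted image.
# The endpoint URL schemes below are assumptions and may change over time.
from urllib.parse import quote

def reverse_search_urls(image_url: str) -> dict:
    """Return reverse-search links to check an image's past occurrences."""
    q = quote(image_url, safe="")  # percent-encode the image URL
    return {
        "google_lens": f"https://lens.google.com/uploadbyurl?url={q}",
        "tineye": f"https://tineye.com/search?url={q}",
        "yandex": f"https://yandex.com/images/search?rpt=imageview&url={q}",
    }
```

Opening each of these links shows where the image has already circulated, which often reveals its original source or an earlier, unedited version.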

Focus on details

But the best way to spot an image created by artificial intelligence is still to open your eyes wide and focus on the details. For example, AIs still have a hard time generating reflections or shadows. The grain of the image is often peculiar, backgrounds are usually very blurred, and any text that appears is meaningless.

"You have to find the inconsistencies in the details. These are often photos that, at first glance, are very realistic, but when you look more closely, there are often problems," analyzes Lise Kiennemann, a journalist for France 24's Les Observateurs site, who works on these topics. "Text is problematic because AI can't generate it well. Another clue is the faces in the background, which are pretty badly done. They are blurred faces, not quite formed."

Dwelling on the fake photos of Donald Trump's arrest shared by Eliot Higgins, founder of the Bellingcat website, we notice, for example, that the text on the police officers' caps is meaningless, that the former American president is carrying a baton, and that there is an inconsistency in his lower limbs: he appears to have three legs. These are all clues that these images were generated by an AI, especially since, at the time of their publication, Donald Trump had still not been arrested.

Making pictures of Trump getting arrested while waiting for Trump's arrest. pic.twitter.com/4D2QQfUpLZ

— Eliot Higgins (@EliotHiggins) March 20, 2023

Image generators also often create asymmetries: disproportionate faces, ears at different heights. They also have trouble reproducing teeth, hair and fingers. In early February, images of women hugging police officers during a protest against the pension reform went viral, but they were quickly identified as fake, since on one of them the policeman had... six fingers.

The hand strikes again🖖: these photos allegedly shot at a French protest rally yesterday look almost real - if it weren't for the officer's six-fingered glove #disinformation #AI pic.twitter.com/qzi6DxMdOx

— Nina Lamparski (@ninaism) February 8, 2023

Towards perfection

These AIs therefore still have flaws, but at the speed at which they are evolving, it could very quickly become impossible to distinguish AI-generated images from real ones. "Midjourney is on V5. The difference between V1 and V5, in just a few months, is absolutely stunning. We might think it will take a few years, but I believe that in a few months we will no longer be able to tell the difference," says Guillaume Brossard, a disinformation specialist and founder of the site Hoaxbuster. Midjourney itself is overwhelmed by the scale of the phenomenon: on March 30, the site announced that, buckling under "extraordinary demand and trial abuse", it was suspending its free trial version.

As a corollary of this evolution, even genuine images now sow doubt. The photo of a young woman arrested in Paris on the sidelines of the demonstrations, for example, was immediately dismissed by Internet users as an artificial intelligence creation, until the author of the photo confirmed that it was real and that other images of the arrest, taken from another angle, corroborated his claims.

It's getting really fascinating and very scary at the same time. We are already entering an era of hyper-mistrust (or hyper bad faith?) where any photograph we refuse to believe is necessarily AI-generated. Confirmation bias will explode. pic.twitter.com/3lmVMgCoGy

— Guillaume Champeau (@gchampeau) March 30, 2023

"We can believe that real images are actually AI-generated, and we can believe that AI images are real, so the boundaries are already very blurred and they will fade a little more in the coming months," says Guillaume Brossard. "But there's one thing AIs don't know how to do, and won't any time soon, I think: reproduce the same scene from multiple angles. And that's a very good clue."

Disinformation in a new era

Finding images of an event taken from different angles is therefore a good way to check whether an image is real. Tools such as detection apps hosted on Hugging Face can also estimate the probability that an image was generated by AI, but their reliability remains limited, and that is unlikely to improve.
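As a hedged sketch of how such a detector is typically queried: image classifiers of this kind return a list of labels with confidence scores, from which one reads off the score of the "AI-generated" class. The label names below and the model identifier in the commented usage lines are assumptions for illustration, not a specific real detector; as noted above, the scores are only indicative.

```python
# Sketch: extract the "AI-generated" probability from an image classifier's
# output. The label names in AI_LABELS are assumptions; community detectors
# use varying label schemes.
AI_LABELS = {"artificial", "ai", "fake", "generated"}

def ai_score(predictions: list) -> float:
    """Return the detector's score for the AI-generated class, 0.0 if absent."""
    for p in predictions:  # predictions: [{"label": ..., "score": ...}, ...]
        if str(p["label"]).lower() in AI_LABELS:
            return float(p["score"])
    return 0.0

# Hypothetical usage with the transformers library (needs network + weights;
# the model ID is a placeholder, not a real repository):
# from transformers import pipeline
# clf = pipeline("image-classification", model="some-community/ai-image-detector")
# print(ai_score(clf("suspect_photo.jpg")))
```

Even when such a tool reports a high score, it is only one clue among others, to be weighed against the visual checks described earlier.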

Faced with these new technologies, which will usher disinformation into a new era, the best way to protect ourselves remains to constantly question the images we see, especially those designed to play on our emotions by trying to scandalize us. According to Guillaume Brossard, emotion is one of the main drivers of disinformation, and "as soon as an image generates an emotion, it is imperative to ask whether it might be doctored in one way or another".

With the meteoric improvement of artificial intelligence, however, it is not certain that this will be enough to fight the growing influence of fake news. "Today, people believe what they want to believe. They don't care whether what they are shown is true or not, and that's the problem," laments the founder of Hoaxbuster. "We are in a way continuing what Trump more or less theorized with alternative facts and the era of post-truth. We are in the middle of it and we will have to learn to live with it."

To counter this threat, media literacy remains an indispensable lever. Voices are also being raised to demand a "pause" in the development of artificial intelligence. Figures such as Elon Musk, the boss of Tesla, and Steve Wozniak, co-founder of Apple, have signed an open letter calling for a six-month moratorium on AI, which they consider an "existential issue" for humanity. It is a position shared by Guillaume Brossard: "There should be a moratorium, a bit like what was done at one time for nuclear weapons. Let humanity sit down for a moment and decide to add a kind of fingerprint that attests 100% that a file comes from an artificial intelligence." But the expert concedes that, "given the stakes of disinformation today, it is far from won."
