Paris (AFP)

A race against the clock: researchers trying to detect "deepfakes", videos manipulated to replace a face or change what a public figure says, are up against falsification techniques that are increasingly sophisticated and increasingly accessible to the general public.

The alarm was first raised on Reddit forums in 2017, when a user shared fake videos of film actresses inserted into pornographic scenes: only the faces had been replaced. What made the once-artisanal technique worrying was that it now drew on artificial intelligence tools to manipulate video convincingly.

Since then, sometimes humorous creations have spread across the internet, showing for example Facebook founder Mark Zuckerberg appearing to say "Whoever controls the data controls the future". But for researchers, the subject is no longer a joke.

"Manipulations can affect the audio or the video.We are getting to the audio plus the video.I wonder what will happen for the next big elections," said AFP Vincent Nozick, Lecturer at the Gaspard Monge Institute of Paris-Est Marne-la-Vallée University.

"To create a + deepfake +, the only skill required is a bit of experience, the first will be a priori missed because you have to choose the right computer model (...) but someone who has done three months, it is good he is ready, "adds the researcher.

- Imitating the voice of a CEO -

In India, a journalist and a member of parliament have been targeted with doctored obscene videos. In Belgium, the Flemish Socialist Party released a video showing US President Donald Trump urging Belgium to withdraw from the Paris climate agreement; the warning that it was a fake went unnoticed by many viewers.

In late August, the Wall Street Journal reported that fraudsters had used artificial intelligence to mimic a CEO's voice and obtain a transfer of more than 220,000 euros.

And the Chinese app Zao, released this summer, lets users insert their own face in place of an actor's in a film clip from just a few photos, marking the technology's arrival in the hands of the general public.

To detect manipulations, several approaches are being studied. The first, which only works for public figures who are already widely filmed and photographed, is to find the original footage that predates the manipulation, or to compare the suspect video with the person's usual "gestural signature".

A second approach focuses on the flaws left by the special effects (inconsistencies in eye blinking, in the way the hair falls, or in the sequence of frames), but the technology is adapting and gradually erasing them.

The third approach is to train artificial intelligence models to detect tampered videos themselves. Success rates are very good, but they depend on the examples available. "A +deepfake+ detector that worked well a year ago will not necessarily work on this year's," says Vincent Nozick.
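For readers curious what this third approach looks like in code, the following is a minimal sketch, not the detectors built by the researchers quoted here: it assumes PyTorch and torchvision, and a hypothetical folder of video frames pre-sorted into "real" and "fake" subdirectories.

```python
# Illustrative sketch of training a real-vs-fake frame classifier.
# The dataset path and backbone choice are assumptions, not from the article.
import torch
import torch.nn as nn
from torchvision import models, datasets, transforms
from torch.utils.data import DataLoader

# Frames are assumed pre-extracted into frames/train/real and frames/train/fake.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("frames/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# A standard image backbone with a 2-class head (real vs. fake).
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Such a classifier only recognizes the artifacts present in its training examples, which is exactly why, as Nozick notes, a detector trained on last year's fakes can fail on this year's.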

- Databases of fake content -

"The machine can perceive things that we do not see with the naked eye, but we need to have databases to evaluate how effective we can be, which is currently lacking," he says. Ewa Kijak, lecturer at the University of Rennes 1 - Irisa laboratory.

Tech giants Facebook and Google, whose platforms are regularly criticized for their role in spreading disinformation, have announced that they want to help by providing databases of fake content.

But the battle is only beginning: new "deepfakes" use "generative adversarial network" (GAN) technology to assess their own detectability before they are even published. In short, they test themselves.
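The adversarial idea can be shown in a toy sketch, again assuming PyTorch; the network sizes and random stand-in data are purely illustrative. A generator is trained against a discriminator that plays the role of the "detector", so the fakes are refined until the detector can no longer flag them.

```python
# Toy GAN loop illustrating the self-testing described above.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 784  # toy sizes: flattened 28x28 images

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
discriminator = nn.Sequential(  # the built-in "detector"
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_batch = torch.rand(32, img_dim)  # stand-in for real training images

for step in range(1000):
    # 1) Train the detector to separate real from generated samples.
    fakes = generator(torch.randn(32, latent_dim)).detach()
    loss_d = bce(discriminator(real_batch), torch.ones(32, 1)) + \
             bce(discriminator(fakes), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator to fool the detector.
    fakes = generator(torch.randn(32, latent_dim))
    loss_g = bce(discriminator(fakes), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Each forgery is thus, by construction, optimized against a detector before anyone outside ever sees it.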

Even more impressive, or more disturbing: a team of German academics has been working since 2016 on "puppetry" software.

Here it is no longer a matter of pasting one's face onto a star's in a Hollywood blockbuster, but of "animating" a public figure's face with invented expressions and words, which could, for example, make it possible to produce a fake press conference by a head of state, live.

Faced with such technologies, "providing detection tools will not be enough," says Ewa Kijak, who calls for greater public awareness: "Until now we could place a little more trust in videos (than in other content). Now I think that's over."

© 2019 AFP