San Francisco (AFP)

Google has released thousands of so-called "deepfake" videos, hyper-realistic fakes that make people appear to say or do things they never did, to help researchers working to identify this potentially dangerous content.

The internet search giant explained in an online article that it filmed hundreds of videos with actors, then transformed them into thousands of fake videos using publicly available "deepfake" generation methods.

This set of videos, real and fake, can now be used by researchers to "develop methods of detection of manipulated videos".

Artificial intelligence (AI) technologies, based on machine learning from large volumes of data, have made possible applications that are now used daily, such as voice assistants and medical diagnostics.

But some AI applications have made headlines for troubling reasons, such as smartphone apps that were used to "undress" people.

"Although many deepfake videos are designed for humorous purposes, others could harm individuals or society," said Nick Dufour of Google and Andrew Gully of Jigsaw, a research subsidiary of Google's parent company, Alphabet.

Many regulators and advocacy groups are particularly concerned about possible repercussions on elections, should fake videos of politicians disrupt a campaign.

For now, the difference between an original video and a manipulated one can usually still be spotted "with the naked eye," said Hao Li, professor of creative technologies at the University of Southern California (USC), in an interview with CNBC on September 20. "But there are also fakes that are very, very convincing," he said.

"We're able to make videos that look perfectly real today. The real question is how long before these hyper-realistic "deepfake" production technologies are available to the general public. No more than six months," he said.

© 2019 AFP