An increasing number of fake faces are being created with artificial intelligence, giving a new dimension to misinformation and cyberattacks, since these faces cannot be detected with the naked eye on social media.

A year ago, the Facebook team succeeded in dismantling a network of more than 900 fake groups, pages and accounts on Instagram and Facebook.

These fake accounts used dozens of non-existent faces generated by artificial intelligence algorithms.

In an article published in the French newspaper "Le Figaro", Ingrid Vergara explained that, unlike a "deepfake", which superimposes one person's face onto another's in a picture or video clip, a "fake face" is a face that does not really exist: its features are generated from scratch by an AI algorithm.

Thanks to the latest advances in the technology, this type of image is so detailed that it is difficult to detect with the naked eye.

This tool belongs to the family of machine learning algorithms: faces are generated by two algorithms working against each other, an architecture known as a generative adversarial network.

According to Vinson Parra, a research professor at the University of Clermont Auvergne, "To understand how these networks work, imagine these two characters: an art forger named Girard, whose goal is to produce paintings that look as real as possible, and an art expert named Daniel, whose job is to distinguish real paintings from fakes and classify them correctly. Girard tries to deceive Daniel by making his paintings look as genuine as possible, while Daniel tries to avoid any misclassification."
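The forger-and-expert game above can be sketched numerically. The toy below is an illustrative assumption, not a real neural network: "real" data are numbers clustered around 5.0, the "expert" scores how plausible a sample looks given the statistics of real data, and the "forger" repeatedly adjusts a single parameter to raise that score. All names and values here are invented for illustration.

```python
import random

random.seed(0)

# Toy "real" data: samples clustered around 5.0 (standing in for real faces).
real_samples = [random.gauss(5.0, 0.5) for _ in range(200)]

# The expert ("Daniel"): scores how plausible a sample looks, based on the
# statistics of the real data it has seen. Higher score = more "real".
mu = sum(real_samples) / len(real_samples)

def discriminator_score(x):
    return -(x - mu) ** 2

# The forger ("Girard"): a single parameter g, adjusted to fool the expert.
g, lr = 0.0, 0.05
for step in range(200):
    # Gradient of discriminator_score at g is 2 * (mu - g); ascend it.
    g += lr * 2 * (mu - g)

# After training, the forger's output has drifted toward the real distribution.
print(round(g, 2))
```

In a real generative adversarial network both players are neural networks trained jointly, and the expert's parameters are updated too; this sketch freezes the expert to keep the adversarial idea visible in a few lines.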

Boosting 'confidence'

This face-generation algorithm was invented in 2014 by machine learning researcher Ian Goodfellow.

Technological advances and the democratization of these tools have made the technique easy to spread.

In 2019, an Uber engineer illustrated how this algorithm works, and raised awareness of its possibilities, through the website ThisPersonDoesNotExist.com.

The face-generation algorithm is currently used in certain industries, such as video games and advertising photography, and by marketing agencies.

The author noted that these fake faces have appeared increasingly often in disinformation campaigns on social networks such as Facebook and Twitter, as well as on many other sites.

The goal is generally to add credibility to content by embodying a fictitious person, or to build trust by giving the impression of interacting with a real person.

Last September, in cooperation with Graphika, a company that analyzes phenomena on social networks, Facebook removed a number of fake accounts operating from China and the Philippines that were found to be using dozens of fake faces.

According to an analysis issued by Graphika, this type of artificial intelligence algorithm is easily accessible online, and its use in covert operations has increased over the past year.

Fake faces are also being used in Russia to create fictitious bloggers who post on divisive topics.

A "deepfake" superimposes one person's face onto another's in a picture or video (social media)

Cyberattack campaign

These fake faces can also be used in AI-enhanced cyberattack campaigns.

Cybersecurity firm Darktrace explains in its white paper, "Chatbots befriend employees of targeted organizations on social networks such as Twitter, LinkedIn, Instagram and Facebook, knowing in advance the types of profiles they are looking for. They interact with people inside organizations using the picture of a non-existent persona created by artificial intelligence, instead of reusing photos of real people."

"Artificial intelligence can capture as much valuable data as possible and harm companies, public institutions and even governments," says Max Heinemer, director of threat detection at Darktrace.

For example, in June 2019, the Associated Press uncovered a fake profile under the name Katie Jones on the professional network LinkedIn. The profile used an artificially generated face and claimed membership in a think tank, in an attempt to infiltrate networks for espionage purposes.

Precise pixel analysis

Uncovering these scams also requires artificial intelligence.

To that end, a French startup is developing algorithms that can assess the integrity and authenticity of content, including fake faces created by generative adversarial networks.

"We can accurately analyze pixels, different lighting conditions and colors, and predict whether a face has been altered; we also rely on the typical invariants of the face," the author quoted Julian Mardas, co-founder of the startup.
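One low-level pixel statistic that detection tools of this kind can exploit is local smoothness: natural photos carry sensor noise and fine high-frequency detail, while synthetic images are often unnaturally smooth in places. The sketch below is a hypothetical illustration of that idea, not the startup's actual method; the grayscale "patches", the energy measure and the threshold are all assumptions made for the example.

```python
import random

def high_freq_energy(img):
    """Mean squared difference between each pixel and its right and bottom
    neighbors -- a crude measure of high-frequency content in a patch."""
    h, w = len(img), len(img[0])
    total, count = 0.0, 0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                total += (img[y][x] - img[y][x + 1]) ** 2
                count += 1
            if y + 1 < h:
                total += (img[y][x] - img[y + 1][x]) ** 2
                count += 1
    return total / count

random.seed(1)
# "Natural" patch: a smooth gradient plus sensor-like noise.
natural = [[x + y + random.gauss(0, 2.0) for x in range(16)] for y in range(16)]
# "Synthetic" patch: the same gradient, but unnaturally smooth.
synthetic = [[x + y + random.gauss(0, 0.1) for x in range(16)] for y in range(16)]

THRESHOLD = 4.0  # illustrative cutoff between the two regimes
for name, patch in [("natural", natural), ("synthetic", synthetic)]:
    verdict = "looks real" if high_freq_energy(patch) > THRESHOLD else "suspicious"
    print(name, verdict)
```

Real detectors combine many such cues (lighting consistency, color statistics, facial geometry) and are themselves learned models rather than a single hand-set threshold.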

Mardas adds that research progress in this area is rapid and continuous.

According to him, approximately 200,000 people specialize in creating deepfakes and fake faces.

Criminal organizations have even managed to open accounts at online banks using a forged identity document and a video of a fake face.