Although research on "deepfakes" began as early as 1997, when a team led by Christoph Bregler, then a scientist at Interval Research, published a paper (1) on modifying a person's lip movements in video so that they appear to say something other than what they actually said, the technology did not capture worldwide public attention until April 17, 2018, when BuzzFeed published a video of Barack Obama speaking in an unusual way and attacking Donald Trump.

It turned out that the American comedian Jordan Peele was voicing the role of Barack Obama, while deepfake technology adjusted Obama's lips to match what Peele was saying. The video drew the attention of tens of millions around the world, and some even believed it was real, although Peele, speaking as Obama, said: "This is a dangerous time. We need to be more vigilant about what we trust from the internet."

You won't believe what Obama says in this video 😉 pic.twitter.com/n2KloCdF2G

- BuzzFeed (@BuzzFeed) April 17, 2018

In 2019, another video appeared of the American actor Bill Hader impersonating Tom Cruise, in which the founder of the YouTube channel "Ctrl Shift Face" grafted Tom Cruise's entire face onto Hader's as he spoke. The technology then spread to the general public, which opened the door to a flood of deepfake videos. But in early March 2021, a video by the visual effects artist Chris Ume appeared in which he impersonated Tom Cruise using deepfake technology with unprecedented precision, fooling millions on TikTok and setting off a new wave of controversy over the technology.

The idea of deepfakes, simplified to a degree that does not really do it justice, is the same old idea of using Photoshop to fake an image: someone wants to look better online, so they transplant their face onto a bodybuilder's body, touch up the picture, and upload it to one of the platforms. In a deepfake, however, artificial intelligence steps in, taking advanced facial recognition techniques and redirecting them to a new task: compositing your facial features, learned from a large collection of your photos and videos, onto another person's face in a moving video!
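To make that concrete, the sketch below shows the core trick behind the classic face-swap approach as it is commonly described: a single encoder learns a compact representation of faces, two separate decoders learn to reconstruct person A and person B, and the swap happens at inference time by decoding A's frames with B's decoder. Everything here (network sizes, the random stand-in data) is simplified and hypothetical; real tools add face alignment, adversarial losses, and blending.

```python
# Minimal sketch of the classic deepfake face-swap idea (hypothetical, simplified).
import torch
import torch.nn as nn

LATENT = 256
IMG = 64  # assume 64x64 RGB crops of aligned faces

def make_encoder():
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(3 * IMG * IMG, 1024), nn.ReLU(),
        nn.Linear(1024, LATENT),
    )

def make_decoder():
    return nn.Sequential(
        nn.Linear(LATENT, 1024), nn.ReLU(),
        nn.Linear(1024, 3 * IMG * IMG), nn.Sigmoid(),
        nn.Unflatten(1, (3, IMG, IMG)),
    )

encoder = make_encoder()
decoder_a, decoder_b = make_decoder(), make_decoder()
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.MSELoss()

# Stand-in batches; in practice these are thousands of aligned face crops.
faces_a = torch.rand(8, 3, IMG, IMG)
faces_b = torch.rand(8, 3, IMG, IMG)

for step in range(20):
    opt.zero_grad()
    # During training, each decoder only ever reconstructs its own person.
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()

# At inference time the decoders are swapped: A's expression, B's identity.
swapped = decoder_b(encoder(faces_a))
```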

It is the natural evolution of image forgery, but video is far more convincing to humans than still images. We have also learned, over more than three decades, that pictures can be fabricated; that kind of knowledge is not acquired overnight, it has to seep into societies little by little until it becomes common sense. Yet pictures are the simple case, and video is nearly everything on the internet. Worse still, the public remains unaware of the scale of the impact of technologies like these. Do you imagine, for instance, that the matter stops at faking a video of Trump, Obama, or Mark Zuckerberg?

Imagine the following scenario: in some country there is a degree of unrest after a police officer killed a citizen selling potatoes in the street over a trivial dispute. Then, at a crucial moment, a video appears in which the country's prime minister demands an end to the violence, or else "the state will use its iron fist as it did with the potato seller."

In normal times it might take several hours, or half a day, to verify the authenticity of such a video; the government announces that it was fabricated with deepfake technology and things calm down. But at critical moments, a single minute can be the difference between a country's unrest subsiding and its flaring up without end. In the scenario above, by the time the government announces that the video is fabricated and proves it four or five hours later, the violence will already have escalated past the point of no return.

Let's now imagine another scenario. Jeff Bezos's company Blue Origin succeeds for the first time in launching a new rocket that consumes remarkably little fuel. Over the day following the launch, the company's shares climb little by little, until someone posts on Twitter a video of Jeff Bezos himself, in a room at a major New York hotel, taking a dose of heroin and speaking as if he has no idea what he is doing.

The video spreads like wildfire, and that in turn pushes a good number of investors to pull their money out. Within about 24 hours Bezos manages to reach the public with the truth, that the video is entirely a deepfake, but by then he has lost some ten billion dollars of his fortune in a single night.


For Bezos, what happened would be just a dent in his fortune, but what if it happened to a novice investor, or in the markets of a developing country?

Could this level of forgery not be used to manipulate the car market, say, or the smartphone market, at critical moments?

Couldn't it change the result of an election if released a few hours before the polls open?

We know that a video of less than a minute can generate billions of views in just a few hours!

The question, then, is not only whether the falsity of these videos can be exposed, which we will answer shortly, but whether you can expose it before the losses have already been done.

On the other hand, all of the above necessarily leads us to a deeper and more important question: could these technologies, by extension, be used for extortion?

In fact, this is entirely possible, but when we talk about blackmail, business and politics are not the primary concern; pornography is!

According to a report (2) issued by the cybersecurity firm Deeptrace, the number of deepfake videos on the internet roughly doubled between 2018 and 2019. The report's most striking finding, however, was that 96% of the videos made with this technology were pornographic: the faces of actresses (3) (mainly American women) were superimposed onto the bodies of pornographic performers, and these videos spread widely among porn consumers as soon as they were released.

In fact, a number of forums have already emerged where you can request a custom video, of yourself or of someone you know. These forums set general rules, such as not faking ordinary people into pornographic content, while allowing it for celebrities, a very strange norm, as if a famous actress would somehow not be harmed by appearing in a porn video. The point, though, is that anyone can contact one of these developers privately and commission a video to blackmail someone. We have not yet documented a case of this, but there is every indication that it is likely to happen.

Take, for example, the report (4) issued by Sensity, a visual threat intelligence company, which monitored a channel on the Telegram app built around a bot whose function is to use artificial intelligence to "remove" clothes from photos of women that users send it. The bot does not, of course, actually remove clothing; it fabricates the image by filling in a plausible body from its training data. According to the report, more than 100,000 such images had come out of this bot.

As usual, the bot's designers claimed it was all just for entertainment, but is it really?

This kind of technology will keep progressing and diversifying. For now, it is not yet polished enough for anyone to use: only a small number of experts can produce videos that cannot easily be told apart from reality, and doing so takes considerable time and effort, while amateurs use techniques whose fabrications are easy to spot. The more important question, though, concerns the future of these technologies: can they evolve to the point where they are indistinguishable from reality?

Will it soon be impossible to tell, with the naked eye or ear, whether a video or audio clip is genuine?

When deepfake techniques first appeared, it was fairly easy to detect fabrication through a handful of cues, such as the contours of the face: because a face is usually grafted onto another of slightly different length and width, the result carried a telltale "stretching" around the face that the naked eye could catch. Deepfake experts then learned to avoid that distortion until it became undetectable without tools. Companies working on detection next turned to eye movements, blinking in particular, and within mere months deepfake experts had smoothed over that problem as well.
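The blinking cue mentioned above can be illustrated with a small sketch. Early deepfakes, trained mostly on still photos, blinked too rarely, so a detector could track the "eye aspect ratio" over time and flag videos with an implausibly low blink rate. The sketch below assumes eye landmarks have already been extracted per frame with any face-landmark library; only the heuristic itself is shown, and the threshold values are illustrative, not canonical.

```python
# Sketch of the eye-blink heuristic used by early deepfake detectors.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) array of landmark points around one eye."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def blink_rate(ear_per_frame: np.ndarray, fps: float, threshold: float = 0.2) -> float:
    """Blinks per minute, counting transitions from open to closed eyes."""
    closed = ear_per_frame < threshold
    blinks = np.count_nonzero(closed[1:] & ~closed[:-1])
    minutes = len(ear_per_frame) / fps / 60.0
    return blinks / minutes if minutes > 0 else 0.0

# Humans blink roughly 15-20 times a minute; a suspiciously low rate was an
# early red flag, until deepfake tools learned to synthesize blinking too.
fake_ear_trace = np.full(1800, 0.3)        # 60 s at 30 fps with no blinks at all
print(blink_rate(fake_ear_trace, fps=30))  # 0.0 blinks/min -> suspicious
```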

"But they also offer a warning: Deepfake technology that has emerged in recent years continues to evolve and improve. And while deepfake videos have not yet been effectively used in many misinformation campaigns, the danger is growing."

1/ https://t.co/B8TUQtGGVn

- Timnit Gebru (@timnitGebru) March 9, 2021

Now the two camps are battling over the analysis of facial dynamics: your face is supposed to move in a way that matches the speech coming out of your mouth. But even on that front, deepfake experts are on their way to erasing this kind of error.
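The kind of consistency check being fought over here can be caricatured as a single number: does the mouth open and close in step with the loudness of the speech? The sketch below assumes two precomputed, frame-aligned signals, mouth opening from a face tracker and audio energy from the soundtrack, and simply correlates them; real detectors are far more sophisticated, so treat this only as an illustration of the idea.

```python
# Toy consistency check: correlate per-frame mouth opening with audio loudness.
import numpy as np

def speech_face_consistency(mouth_opening: np.ndarray, audio_energy: np.ndarray) -> float:
    """Pearson correlation between mouth opening and audio energy per frame."""
    m = (mouth_opening - mouth_opening.mean()) / (mouth_opening.std() + 1e-8)
    a = (audio_energy - audio_energy.mean()) / (audio_energy.std() + 1e-8)
    return float(np.mean(m * a))

# In a genuine video the two signals track each other; a correlation near zero
# (or negative) would be one weak hint of a dubbed or synthesized face.
```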

A research paper (5) released in February 2021 by the University of California San Diego showed that current deepfake techniques can defeat the detection systems available today.
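The attack family studied in that line of work adds a tiny, nearly invisible perturbation to each frame so that a detector flips its verdict from "fake" to "real". The sketch below illustrates the general idea with a stand-in, untrained detector and the classic fast gradient sign method; it is not the paper's actual pipeline, just a minimal demonstration of how such adversarial examples are constructed.

```python
# Sketch: adversarial perturbation against a (stand-in) deepfake detector.
import torch
import torch.nn as nn

detector = nn.Sequential(           # placeholder for a trained deepfake detector
    nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),               # logits: [real, fake]
)

frame = torch.rand(1, 3, 64, 64, requires_grad=True)   # stand-in deepfake frame
loss = nn.functional.cross_entropy(detector(frame), torch.tensor([1]))  # class 1 = "fake"
loss.backward()

# Move the frame in the direction that hurts the "fake" prediction the most,
# within an imperceptible pixel budget.
epsilon = 2 / 255
adversarial = (frame + epsilon * frame.grad.sign()).clamp(0, 1).detach()

print(detector(frame).argmax().item(), detector(adversarial).argmax().item())
```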

The matter then, perhaps, is already getting out of hand.

These reality-falsifying technologies depend on machine learning. It is as if you owned a candy factory: when your first batch comes out, the marketing team takes it to the public and gathers impressions, a team of analysts studies them and feeds them into the second batch, which comes out better than the first, then the same team goes back out to analyze the audience's opinions, and so on.

Artificial intelligence does almost the same thing: errors are fed back into the training data and the model improves with every additional round. This tug of war between the faker and the fake detector, wherever it arises, will therefore continue without pause, and it has been running since the first moment we entered the digital age.
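That tug of war is, in fact, baked into the most common generation method itself: a generative adversarial network, in which a generator (the "faker") and a discriminator (the "detector") are trained against one another, each improving in response to the other's mistakes. A toy sketch on made-up two-dimensional data, not any particular deepfake system:

```python
# Toy GAN loop: the "faker" and the "detector" improve each other in turns.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))   # noise -> sample
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))    # sample -> real?
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real_data = torch.randn(256, 2) * 0.5 + 2.0   # stand-in "real" distribution

for step in range(1000):
    noise = torch.randn(64, 16)
    fake = G(noise)

    # Detector's turn: label real samples 1, fakes 0.
    opt_d.zero_grad()
    d_loss = bce(D(real_data[:64]), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Faker's turn: try to make the detector call fakes real.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```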

A team (6) from the University of Virginia in the United States says that alongside the technical side of the problem there is a human factor: people's readiness to believe this new kind of fabrication. We know that the less experience someone has with the internet in general, and with digital imaging and forgery techniques in particular, the less skeptically they evaluate information. We also know that people's belief in fake news rests mainly on their prior convictions: if you lean toward one political current, you are more likely to believe fake news aimed at the opposing one.

This kind of bias can be addressed, and the team suggests that high schools and universities incorporate cognitive psychology and critical-thinking tools into their compulsory curricula. It is not only about teaching people about deepfakes specifically, necessary as that is; it extends to an urgent need to develop the thinking habits of society as a whole, so that more people can recognize their own biases and examine everything presented to them online with a critical eye. From that standpoint, we cannot expect technology alone to solve the problem.

When will this happen?

Can we really get to that point?

The answers to these questions are, unfortunately, "we do not know." What we do know is that governments around the world are racing to pour enormous funding into deepfake experts in order to put these technologies to use in intelligence work, while doing very little about contemporary digital literacy!

—————————————————————————————-

Sources

  1. Video Rewrite: Driving Visual Speech with Audio
  2. Deepfake videos are a far, far bigger problem for women
  3. Is it legal to swap someone's face into porn without consent?
  4. Fake naked photos of thousands of women shared online
  5. Deepfake detectors can be defeated, computer scientists show for the first time
  6. Why Are Deepfakes So Effective?