Although research on “deepfake” technology began as early as 1997, when a team led by Christoph Bregler, then a scientist at Interval Research, published a paper (1) on modifying human lip movements in video so that a person appears to be saying something different from what he actually said, it did not capture worldwide public attention until April 17, 2018, when BuzzFeed published a video of Barack Obama speaking in an unusual way and attacking Donald Trump.

The voice turned out to belong to the American comedian Jordan Peele, playing the role of Obama, while the deepfake technique adjusted Obama's lips to match what Peele was saying. The video drew the attention of tens of millions around the world, and some even believed it, although Peele, speaking as Obama, warned: "This is a dangerous time; we must be vigilant about what we get from the internet."

You won't believe what Obama says in this video 😉 pic.twitter.com/n2KloCdF2G

— BuzzFeed (@BuzzFeed) April 17, 2018

In 2019, another video went viral: the American actor Bill Hader impersonating Tom Cruise, while the founder of the YouTube channel “Ctrl Shift Face” superimposed Cruise's entire face onto Hader's as he spoke. In the meantime, less accurate versions of the technology became available to the general public, opening the door to a flood of deepfake videos. But it was the videos that appeared in early March 2021, in which the visual effects artist Chris Ume imitated Tom Cruise with unprecedented accuracy and deceived millions on TikTok, that set off a new wave of controversy about the technology.

The idea of deepfakes, at the risk of extreme oversimplification, is the old idea of Photoshop forgery taken a step further. Someone who wants to improve his image online pastes his face onto a bodybuilder's body, touches up the result, and uploads it to one of his accounts. In a deepfake, artificial intelligence intervenes: it uses advanced facial recognition techniques, learns your facial features by analyzing a large number of your photos and videos, and then grafts them onto another person's face in a moving video!
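The classic deepfake architecture behind that "grafting" step is a pair of autoencoders that share one encoder: the encoder learns pose and expression from both people's footage, and each person gets a private decoder that renders those expressions with that person's features. A toy sketch of the swap, with every "network" replaced by a random linear map purely for illustration (real systems use deep convolutional networks trained for days):

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented toy dimensions: a "frame" is a flat vector of 64 pixels,
# compressed to a 16-dimensional latent code.
LATENT, PIXELS = 16, 64

encoder = rng.standard_normal((LATENT, PIXELS))    # shared between identities
decoder_a = rng.standard_normal((PIXELS, LATENT))  # trained only on person A
decoder_b = rng.standard_normal((PIXELS, LATENT))  # trained only on person B

def swap_face(frame_of_a: np.ndarray) -> np.ndarray:
    """Encode a frame of person A, then decode it with B's decoder.

    Because the encoder is shared, the latent code captures pose and
    expression; B's decoder renders those with B's facial features.
    """
    latent = encoder @ frame_of_a
    return decoder_b @ latent

frame = rng.standard_normal(PIXELS)  # stand-in for one frame of person A
fake = swap_face(frame)
print(fake.shape)  # same shape as the input frame
```

The key design point survives even in this caricature: nothing about person B's identity enters through the input, only through the decoder weights, which is why a model trained on enough footage of B can re-render anyone's expressions as B.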

It is the natural evolution of image forgery, but video convinces people far more readily than still images. Moreover, we have learned, over more than three decades, that images can deceive us; that kind of knowledge is not acquired overnight, but seeps into societies little by little. And images are the simple case: video is almost everything on the internet now. Worse, the public is still unaware of how far technologies like this can reach. Do you imagine, for example, that it stops at faking a video of Trump, Obama, or Mark Zuckerberg?

Imagine the following scenario: a country is in turmoil after a police officer killed a street vendor selling potatoes for a trivial reason. At that crucial moment, a video circulates of the country's prime minister demanding an end to the violence, or else "the state will use its iron hand as it did with the potato seller."

In normal times, it might take several hours, or half a day, to verify the video's authenticity, for the government to announce that it was fabricated with deepfake technology, and for things to calm down. But in time-critical situations, a single minute can be the difference between unrest subsiding and a country continuing to burn. In the scenario above, by the time the government announces, four or five hours later, that the video is fabricated, the violence may already have escalated past the point of no return.

Now imagine another scenario: Jeff Bezos's company Blue Origin succeeds for the first time in launching a new rocket that consumes very little fuel. The day after the launch, the company's shares rise steadily, until someone posts a video on Twitter of Bezos himself, in a plush office in a major New York hotel, taking a dose of heroin and talking as if unaware of what he is doing.

The video spreads like wildfire, prompting a good number of investors to dump the company's shares. Within about 24 hours, Bezos manages to get the truth out, that the video was entirely fabricated with deepfake technology, but by then he has lost some ten billion dollars of his fortune in a single night.

Jeff Bezos

For Bezos, what happened is just a fluctuation in his wealth, but what if this happened to a novice investor or in the markets of a developing country?

Could this level of forgery be used to manipulate the car market, say, or the smartphone market, at very crucial moments?

Couldn't it change the result of an election, if released just a few hours before polls open?

We know that a video of less than a minute can generate billions of views in just a few hours!

So the question is not only whether we can debunk these techniques, which we will answer shortly; it is that the losses pile up faster than the deception can be revealed.

On the other hand, the foregoing leads us to a deeper and more important question: couldn't these technologies be used for blackmail?

In fact, this is very possible, but when we talk about blackmail, business and politics will not be the primary concern. Pornography will!

According to a report (2) issued by the information-security company Deeptrace, the number of deepfake videos on the internet doubled between 2018 and 2019. The report's most striking finding was that 96% of the videos produced with the technology were pornographic: the faces of actresses (3) (mainly American) were grafted onto the faces of performers in pornographic films, and the resulting videos spread widely among users of pornographic sites as soon as they were released.

In fact, a group of forums has already sprung up where you can request a video, whether of yourself or of one of your acquaintances. These forums set general rules, including that no ordinary person's image may be faked into pornographic content, yet they allow the pornographic faking of celebrities. This is very strange behavior, as if a famous actress would somehow not be harmed by appearing in a porn video. The point, though, is that anyone can contact one of these programmers privately and commission a video to blackmail someone. We have not yet detected this happening, but the evidence suggests it is very likely to.

Take, for example, the report (4) issued by the intelligence company Sensity, which monitored a channel on the Telegram application hosting an internet "bot" whose function is to use artificial intelligence to remove the clothes from pictures of women that people send it. The bot does not really remove clothes, of course; it fakes the pictures by filling in a plausible body from its database. The report counted some 100,000 images produced by this bot.

As usual, the bot's designers claimed it was all just for entertainment, but is it really? This type of technology will keep advancing and spreading. For now, it is not refined enough for everyone to use: a few experts can produce videos that cannot easily be told apart from the real thing, at great cost in time and effort, while amateurs use techniques whose fabrications are easy to spot. But the most important question now concerns the future of these technologies: can they develop to the point where they are indistinguishable from reality? Will it soon be impossible to tell, with the naked eye or ear, whether a video or audio clip is genuine?!

When deepfake techniques first appeared, fabrication was easy to detect through several criteria, such as the topography of the face: the face being grafted on usually differs in length and width from the face beneath it, which left a "stretch" effect that could be spotted with the naked eye. But deepfake experts have since managed to eliminate that distortion, to the point where it can no longer be detected by eye.

"But they also offer a warning: Deepfake technology that has emerged in recent years continues to evolve and improve. And while deepfake videos have not yet been effectively used in many misinformation campaigns, the danger is growing."

1/https://t.co/B8TUQtGGVn

— Timnit Gebru (@timnitGebru) March 9, 2021

Now the two camps are fighting over facial dynamics, meaning that your face is supposed to move in a way that matches the speech coming out of your mouth, but even there, deepfakes are on their way to masking this kind of error. A research paper (5) released in February 2021 by the University of California, San Diego showed that current deepfake techniques can defeat the available detection techniques. The matter, then, may have already gotten out of hand.
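The defeat demonstrated in such work rests on adversarial examples: tiny, targeted perturbations added to a fake frame that push a detector's score the wrong way. A minimal sketch of the idea, using an invented linear "detector" so the gradient is trivial (real attacks backpropagate through a deep network, but the principle is the same):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear "deepfake detector": higher score means "more likely fake".
# Its weights stand in for a trained network; all values here are invented.
w = rng.standard_normal(64)

def detector_score(frame: np.ndarray) -> float:
    return float(w @ frame)

fake_frame = rng.standard_normal(64)  # stand-in for one deepfake frame

# For a linear model the gradient of the score w.r.t. the input is just w,
# so nudging each pixel a small step against sign(w) lowers the "fake"
# score -- the fast-gradient-sign idea in one line.
eps = 0.2
adversarial = fake_frame - eps * np.sign(w)

print(detector_score(fake_frame), detector_score(adversarial))
```

The perturbation budget `eps` is small per pixel, so the doctored frame looks unchanged to a human, yet the detector's score drops; that asymmetry is what makes detector-versus-faker an arms race rather than a solved problem.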

These pseudo-reality techniques depend on machine learning. It is like owning a candy factory: when the first batch goes out, the marketing team gathers the audience's impressions, a team of analysts studies them, and the second batch comes out better than the first; then the team goes back out to collect opinions again, and so on. Artificial intelligence does almost the same thing: errors are fed back into the training process, and the model improves with every additional round. So this struggle between the faker and the detector, wherever it arises, will continue without pause, as it has from the first moment we entered the digital age.
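The candy-factory loop above is literally how these models train: produce, measure the errors, fold the errors back into the next batch. A toy sketch with a linear model and gradient descent, where every number is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# The "audience": 100 examples with a known relationship we want to learn.
X = rng.standard_normal((100, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w

w = np.zeros(3)          # first production run: pure guesswork
for batch in range(50):  # each pass = one round of feedback and improvement
    errors = X @ w - y                  # collect the audience's complaints
    w -= 0.01 * X.T @ errors / len(y)   # fold them into the next batch

print(np.abs(X @ w - y).mean())  # average error after 50 rounds of feedback
```

Each iteration shrinks the error a little, and nothing in the loop ever says "stop"; the same dynamic drives both the forgers improving their fakes and the detectors improving their filters.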

A team (6) from the University of Virginia notes that alongside the technical side of the problem, there is a human factor: people's readiness to believe this new type of fabrication. We know that the less experience someone has with the internet in general, and with photography and digital forgery in particular, the less suspicious they are when evaluating information. We also know that people's belief in fake news rests mainly on their prior convictions: if you lean toward one political current, there is a good chance you will believe fake news aimed against the other.

This kind of bias can be addressed, and the team suggests that high schools and colleges incorporate cognitive psychology and critical-thinking tools into their mandatory curricula. It is not just about teaching people about deepfakes specifically, necessary as that is, but about the much-needed development of society's thinking habits as a whole, so that more people can recognize their own biases and examine everything presented to them online with a rational critic's eye. From that standpoint, we cannot wait for technology alone to solve the problem.

When will that happen?

Can we really get to that point?

Unfortunately, the answer to these questions is "we don't know." What we do know is that governments around the world are racing to pour massive financial support into deepfake experts to use their techniques for intelligence purposes, while doing very little for contemporary digital literacy!

———————————————————————————————-

Sources

  1. Video Rewrite: Driving Visual Speech with Audio

  2. Deepfake videos are a far, far bigger problem for women

  3. Is it legal to swap someone's face into porn without consent?

  4. Fake naked photos of thousands of women shared online

  5. Deepfake detectors can be defeated, computer scientists show for the first time

  6. Why Are Deepfakes So Effective?