Actors Tom Cruise, Robert Downey Jr., Ewan McGregor and Jeff Goldblum, along with Star Wars creator George Lucas, are sitting, apparently drunk, at a table with a journalist from the website Collider. But look a little closer and you suddenly notice that Goldblum's face is too large in relation to his body. The video is fake.
This video is a deepfake: a recording in which someone else's face has been digitally pasted onto a person's body. The technology has been on the rise since 2017. The Dutch cybersecurity company Deeptrace counted some fourteen thousand deepfakes on video sites this year: twice as many as six months earlier.
A major problem for society, media headlines warn: soon no one will know for sure whether a video of a politician making harsh statements is real or not.
The future risk of far-reaching disinformation campaigns built on deepfakes is large, experts warn. Deepfake makers have already convincingly imitated the American presidents Barack Obama and Donald Trump, and the technology seems to take a major leap forward every few months.
Mostly porn, little politics
For the time being, deepfakes are mainly used for porn, Deeptrace concluded: 96 percent of all deepfakes are pornographic, with the face of a famous actress or singer pasted onto pornographic material. And only women are targeted - Deeptrace did not find a single deepfake porn video featuring male actors.
Deepfakes themselves have not caused any real political problems so far. But the technology's mere existence is already undermining trust in video footage.
Take the African country of Gabon, where great confusion arose when a strange video of President Ali Bongo circulated around New Year. Bongo had been absent for a while without explanation - and now, in the video, he looked strange, barely blinked, and remained seated.
Researchers found no evidence that the video was actually a deepfake. The Gabonese army, however, suspected a cover-up. A failed coup attempt followed.
Nixon talks about a failed moon landing in a convincing deepfake
Deepfakes keep getting better
Improved techniques for making deepfakes are being developed both in dedicated software laboratories and on public programming forums, and progress is rapid.
In May of this year, Samsung's artificial intelligence lab succeeded in making a deepfake based on just a single photo, bringing Albert Einstein and the Mona Lisa to life.
There are many methods for making a deepfake, but they all work by training an artificial intelligence on images. Usually, large databases of photos of the same person in different poses are used. From these, the software generates a digital face that can be projected onto a video.
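To make that "learn a face from a photo database" idea concrete, here is a minimal, hypothetical sketch. It uses the classic eigenfaces technique (principal components) on random stand-in data - not a real deepfake network, and the numbers (200 photos, 32x32 pixels, 50 components) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a database of aligned face photos:
# 200 "images" of 32x32 pixels, each flattened to a vector.
faces = rng.normal(size=(200, 32 * 32))

# Learn a low-dimensional face representation (the "eigenfaces" idea):
# subtract the mean face, then keep the top principal components.
mean_face = faces.mean(axis=0)
centered = faces - mean_face
_, _, components = np.linalg.svd(centered, full_matrices=False)
basis = components[:50]  # 50 components stand in for the learned model

# A face can now be encoded as 50 numbers and decoded back to pixels.
code = (faces[0] - mean_face) @ basis.T    # encode
reconstruction = mean_face + code @ basis  # decode: a generated face image

print(reconstruction.shape)  # (1024,)
```

Modern deepfake systems replace this linear model with deep neural networks, but the principle is the same: compress many photos into a compact face model, then generate new face images from it.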
But a company like Samsung develops algorithms that need less input. Samsung's system was first taught how human faces move in general; after that, it only needed the main features of a face - the shape of the eyes and nose, for example - to produce a convincing deepfake.
For now, deepfakes can still be reasonably well distinguished from real footage. The algorithms are not perfect, and you often see differences in color, shape or lighting - as in the Collider video. Collider used a 'face swap', a kind of projection of another face onto an existing one, but that projection stops at the edges of the face: that is where problems often show.
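The "projection stops at the edges" problem can be illustrated with a toy blend. This sketch uses random stand-in images (not real face-swap code): a generated face is pasted into a frame using a soft elliptical mask, and the comments note where the seams the article describes would come from:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins: a target video frame and a generated source face,
# both 64x64 grayscale images with values in [0, 1].
frame = rng.uniform(size=(64, 64))
generated_face = rng.uniform(size=(64, 64))

# The face-swap projection only covers an oval face region. Build a mask
# that is 1 inside the face and fades to 0 toward the edges.
ys, xs = np.mgrid[0:64, 0:64]
dist = np.sqrt(((ys - 32) / 24) ** 2 + ((xs - 32) / 18) ** 2)
mask = np.clip(1.5 - dist, 0.0, 1.0)  # soft ramp from 1 (face) to 0 (frame)

# Blend: inside the mask we see the generated face, outside the original frame.
# A hard mask (0/1) would leave the visible seams the article mentions; the
# soft ramp hides them, but color and lighting mismatches can still show.
swapped = mask * generated_face + (1 - mask) * frame

print(swapped.shape)  # (64, 64)
```

Real face-swap tools also warp the source face to match the target's pose and correct colors before blending, but the boundary between "projected" and "original" pixels remains the weak spot.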
Will there come a time when the human eye can no longer recognize a deepfake? Experts fear so. That is why many companies and scientists are already working on algorithms that can recognize deepfakes by small pixel-level anomalies.
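As an illustration of such pixel-level detection, here is a toy sketch with simulated noise and a hypothetical threshold: generated face regions are often smoother than the camera noise around them, and a simple comparison of fine pixel detail can flag that gap.

```python
import numpy as np

rng = np.random.default_rng(2)

def high_freq_energy(img):
    """Mean squared difference between neighbouring pixels -
    a crude measure of fine pixel-level detail."""
    dx = np.diff(img, axis=1)
    dy = np.diff(img, axis=0)
    return (dx ** 2).mean() + (dy ** 2).mean()

# Toy frame: natural sensor noise everywhere (std 0.1).
frame = rng.normal(0.5, 0.1, size=(64, 64))

# Simulate a pasted deepfake face region: the generated pixels are
# smoother (std 0.02) than the camera noise around them.
fake = frame.copy()
fake[16:48, 16:48] = 0.5 + 0.02 * rng.normal(size=(32, 32))

# Compare the suspect region against the frame as a whole.
inside = high_freq_energy(fake[16:48, 16:48])
outside = high_freq_energy(frame)
print(inside < 0.5 * outside)  # a large gap hints at manipulation
```

Production detectors are far more sophisticated - they are themselves neural networks trained on known deepfakes - but many of them exploit exactly these kinds of subtle statistical inconsistencies.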
But porn sites such as Pornhub and social media such as Twitter are also fighting deepfakes. Porn videos in which someone else's face has been pasted onto the actress may no longer be posted on those sites. The reason: the owner of that face did not consent to the video, and that is not allowed.
Yet it seems a race against the clock. Ever easier-to-use deepfake software keeps appearing online - sometimes just for fun: Snapchat is experimenting with a special Cameo feature that turns your face into a deepfake animation.
How long before deepfakes really become a problem? According to Hao Li, a scientist who was among the first to make a credible deepfake, that moment is approaching fast. He appeared on the American TV program Power Lunch in September with a clear message: consumers can expect realistic deepfakes "within a year".