New techniques could revolutionize the world of cinema, and dubbing in particular. A program can map the dubbing actor's lip movements onto the face of the on-screen actor or actress, achieving perfect synchronization between the two. Artificial intelligence has made the process cheaper, and it can run on ordinary computers.

While some people never watch their films or series in the original version, others cannot stand dubbing. The culprits: poor translations, or the sometimes all-too-obvious mismatch between mouth and voice. But a new technology could change everything: it can map the dubber's lip movements onto the face of the actor or actress who shot the film. The timing would be perfect: you could even lip-read.


This technology is not applied during filming but at the dubbing stage. Everything is automatic: a camera films the dubber's face, then a program analyzes his or her expressions and finds their equivalents among all the footage the actor has already shot. Those matching expressions are then copied and pasted onto the actor's face.
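The matching step described above can be sketched in a few lines. This is a minimal illustration only, assuming each mouth shape has already been reduced to a small feature vector (real systems extract such vectors from facial landmarks in video frames; the data and function names here are hypothetical):

```python
# Hypothetical sketch: for each mouth shape from the dubber, find the
# closest mouth shape among the actor's previously filmed frames.

def euclidean(a, b):
    """Distance between two mouth-shape feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def match_frames(dubber_frames, actor_library):
    """Return, for each dubber frame, the index of the nearest
    actor frame (a nearest-neighbor search over the library)."""
    return [
        min(range(len(actor_library)),
            key=lambda i: euclidean(frame, actor_library[i]))
        for frame in dubber_frames
    ]

# Toy example: 2-D "mouth openness / mouth width" features.
actor_library = [(0.0, 0.0), (0.5, 0.2), (1.0, 0.8)]  # frames already shot
dubber_frames = [(0.9, 0.7), (0.1, 0.0)]              # dubber's new line
print(match_frames(dubber_frames, actor_library))     # -> [2, 0]
```

The selected actor frames would then be blended onto the actor's face in the final video, a compositing step this sketch deliberately leaves out.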

A cheaper process

Thanks to artificial intelligence, the process is cheaper and runs on off-the-shelf computers. And it goes even further: it is now also possible to replicate movements of the head, eyebrows and mouth. If an actor is unconvincing in a scene, he can be turned into a virtual puppet, his expressions replaced by those of an unknown performer who plays the part a little better.

But this technology is not without risks. It inspired the "deepfake": plastering a celebrity's face onto another person's body to fabricate false testimony or false accusations. Beware, then, of video manipulation.