New AI Technology To Help Make Lip Sync Dubbing More Accurate

Researchers have recently developed an Artificial Intelligence (AI)-based system that can edit the facial expressions of actors to match dubbed voices. With this technology, poorly synced lip dubbing may become a thing of the past.

The system, named Deep Video Portraits, can also be used to correct gaze and head pose during video conferencing, and it opens new possibilities for post-production and visual effects, according to research presented at the SIGGRAPH 2018 conference in Vancouver, Canada.

“This technique could also be used for post-production in the film industry where computer graphics editing of faces is already widely used in today’s feature films,” said study co-author Christian Richardt from the University of Bath in Britain.

According to the researchers, the new system could help the industry save time and cut costs in post-production.

Unlike earlier approaches, which focused mainly on correcting movements of the face interior, Deep Video Portraits can animate the whole face, including the eyes, eyebrows and head position, using tools and controls known from computer-graphics face animation.

Even when the head is moved around, the technology can synthesize a plausible static video background.

“It works by using model-based 3D face performance capture to record the detailed movements of the eyebrows, mouth, nose, and head position of the dubbing actor in a video,” said researcher Hyeongwoo Kim from the Max Planck Institute for Informatics in Germany.

“It then transposes these movements onto the ‘target’ actor in the film to accurately sync the lips and facial movements with the new audio,” Kim added.
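The capture-and-transfer pipeline Kim describes can be sketched, in deliberately simplified form, as a per-frame parameter swap. Everything below (class names, fields, values) is a hypothetical illustration, not the actual Deep Video Portraits code, which estimates such parameters with a model-based 3D face tracker and renders the result with a trained neural network:

```python
from dataclasses import dataclass, replace
from typing import List

@dataclass
class FaceParams:
    """Per-frame parameters a model-based 3D face tracker might estimate.
    Names and shapes here are illustrative assumptions only."""
    identity: str            # stand-in for per-actor identity coefficients
    expression: List[float]  # blendshape-style expression coefficients
    head_pose: List[float]   # rotation + translation (6 degrees of freedom)
    gaze: List[float]        # eye-gaze direction

def transfer(source: FaceParams, target: FaceParams) -> FaceParams:
    """Keep the target actor's identity, but drive expression,
    head pose and gaze from the source (dubbing) actor's performance."""
    return replace(
        target,
        expression=source.expression,
        head_pose=source.head_pose,
        gaze=source.gaze,
    )

# Example: one frame of the dubbing actor driving the on-screen actor.
dub = FaceParams("dubbing_actor", [0.8, 0.1], [0.0, 5.0, 0.0, 0.0, 0.0, 1.0], [0.1, -0.2])
film = FaceParams("film_actor", [0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 1.0], [0.0, 0.0])
driven = transfer(dub, film)
```

In the actual system, the modified parameter sequence is not composited directly; it conditions a rendering network that synthesizes photorealistic video frames of the target actor.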

For now, the research is at the proof-of-concept stage, and it will take time before the technology is ready for real-world use. But the researchers anticipate that the approach could make a real difference to the visual-effects industry.

“Despite extensive post-production manipulation, dubbing films into foreign languages always presents a mismatch between the actor on screen and the dubbed voice,” Professor Christian Theobalt from the Max Planck Institute for Informatics said.

“Our new Deep Video Portrait approach enables us to modify the appearance of a target actor by transferring head pose, facial expressions, and eye motion with a high level of realism,” Theobalt added.


