NVIDIA researchers introduce an AI system that generates a realistic talking-head video of a person from a single source image and a driving video. The source image encodes the person's appearance, and the driving video directs the motion in the synthesized output. The researchers propose a pure neural rendering approach: the talking-head video is rendered by a deep network in a one-shot setting, without a 3D graphics model of the human head. Compared to 3D graphics-based models, such 2D-based methods offer several advantages.
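The pipeline described above can be sketched in code. Note this is a minimal illustrative sketch, not NVIDIA's actual implementation: the function names and the toy "encoder", "motion extractor", and "generator" below are hypothetical stand-ins that only mimic the data flow (encode appearance once from the source image, then extract motion per driving frame and render).

```python
# Hypothetical sketch of a one-shot talking-head pipeline.
# All modules are illustrative stand-ins, not NVIDIA's actual networks.
import numpy as np

def encode_appearance(source_image):
    # Stand-in for an appearance encoder: a downsampled feature map
    # derived from the source image, computed only once.
    return source_image[::4, ::4]

def extract_motion(frame):
    # Stand-in for a motion/keypoint extractor: a crude "keypoint"
    # at the frame's intensity centroid.
    ys, xs = np.indices(frame.shape[:2])
    w = frame.mean(axis=-1) + 1e-8
    return np.array([(ys * w).sum() / w.sum(), (xs * w).sum() / w.sum()])

def render_frame(appearance, motion):
    # Stand-in for the generator: shift the appearance features by the
    # keypoint offset to mimic motion transfer.
    dy, dx = (motion - np.array(appearance.shape[:2]) / 2).astype(int)
    return np.roll(appearance, shift=(dy, dx), axis=(0, 1))

def talking_head(source_image, driving_video):
    appearance = encode_appearance(source_image)  # appearance encoded once
    # One output frame per driving frame, driven by extracted motion.
    return [render_frame(appearance, extract_motion(f)) for f in driving_video]

source = np.random.rand(64, 64, 3)                       # one source image
driving = [np.random.rand(64, 64, 3) for _ in range(5)]  # driving video frames
video = talking_head(source, driving)
print(len(video), video[0].shape)
```

The key property the sketch captures is the one-shot setting: appearance is extracted from a single image up front, and only per-frame motion varies across the output video.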
Paper: https://arxiv.org/pdf/2011.15126.pdf
Github: https://nvlabs.github.io/face-vid2vid/
submitted by /u/ai-lover
