Until recently, animating a picture required a large set of images of the same person to model their appearance. Animating a static face image with target facial expressions and movements is important for image editing and movie production, but it is challenging due to the complex geometry and movement of human faces.
This is a great paper presenting one of the more innovative solutions to face reenactment. It introduces FaR-GAN, a one-shot face reenactment model that takes only one face image of any given source identity and a target expression as input, and produces a face image of the same source identity but with the target expression.
FaR-GAN makes no assumptions about the source identity, facial expression, head pose, or even image background. The method is evaluated on the VoxCeleb1 dataset, and the paper concludes that FaR-GAN generates higher-quality face images than the compared methods.
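To make the one-shot setting concrete, here is a minimal sketch of the model's input/output interface as described above. The class name, the landmark-based expression encoding, and all internals are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

# Hypothetical sketch of a one-shot reenactment interface.
# `OneShotReenactor` and the 68-landmark expression encoding are
# assumptions for illustration, not FaR-GAN's actual design.
class OneShotReenactor:
    def __init__(self, image_shape=(256, 256, 3), expr_dim=68 * 2):
        self.image_shape = image_shape   # single source identity image
        self.expr_dim = expr_dim         # e.g. 68 two-dimensional landmarks

    def reenact(self, source_image, target_expression):
        # A real model would encode identity from the one source image,
        # condition a generator on the target expression, and decode a frame.
        assert source_image.shape == self.image_shape
        assert target_expression.shape == (self.expr_dim,)
        # Placeholder output with the correct shape.
        return np.zeros(self.image_shape, dtype=np.float32)

reenactor = OneShotReenactor()
source = np.zeros((256, 256, 3), dtype=np.float32)   # one face image
expression = np.zeros(68 * 2, dtype=np.float32)      # target expression
frame = reenactor.reenact(source, expression)
print(frame.shape)  # (256, 256, 3)
```

The key point the sketch captures is that, unlike earlier appearance-modeling approaches, only a single image per identity is ever passed in.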