Learning Identity-Invariant Motion Representations for Cross-ID Face Reenactment

Human face reenactment aims at transferring motion patterns from one face (from a source-domain video) to another (in the target domain with the identity of interest). While recent works report impressive results, they are not able to handle multiple identities in a unified model. In this paper, we propose a unique network, CrossID-GAN, to perform multi-ID face reenactment. Given a source-domain video with extracted facial landmarks and a target-domain image, our CrossID-GAN learns identity-invariant motion patterns from the extracted landmarks and transfers such information to produce videos whose identity matches that of the target domain. Both supervised and unsupervised settings are considered to guide our model during training. Our qualitative and quantitative results confirm the robustness and effectiveness of our model, and ablation studies further confirm our network design.
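To make the described pipeline concrete, below is a minimal sketch of how such a system might be wired: a motion encoder maps per-frame facial landmarks from the source video to identity-invariant motion codes, and a generator combines each code with a target-identity image to render the reenacted frames. All module names, layer choices, and shapes are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a CrossID-GAN-style pipeline (not the authors' code):
# landmarks -> identity-invariant motion code -> identity-conditioned generation.
import torch
import torch.nn as nn

class MotionEncoder(nn.Module):
    """Encodes 2D facial landmarks (e.g. 68 points) into a motion code."""
    def __init__(self, num_landmarks=68, code_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_landmarks * 2, 256), nn.ReLU(),
            nn.Linear(256, code_dim),
        )

    def forward(self, landmarks):                    # (B, T, 68, 2)
        b, t = landmarks.shape[:2]
        return self.net(landmarks.view(b, t, -1))    # (B, T, code_dim)

class Generator(nn.Module):
    """Renders a frame from a target-identity image and a motion code."""
    def __init__(self, code_dim=128):
        super().__init__()
        self.img_enc = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),
        )
        self.fuse = nn.Conv2d(128 + code_dim, 128, 3, 1, 1)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, target_img, motion_code):      # (B,3,H,W), (B,code_dim)
        feat = self.img_enc(target_img)
        # Broadcast the motion code spatially and fuse with identity features.
        code = motion_code[:, :, None, None].expand(-1, -1, *feat.shape[2:])
        return self.dec(self.fuse(torch.cat([feat, code], dim=1)))

# Usage: reenact a target-identity image with motion from a source video.
enc, gen = MotionEncoder(), Generator()
src_landmarks = torch.randn(1, 16, 68, 2)   # landmarks of 16 source frames (dummy data)
target_img = torch.randn(1, 3, 64, 64)      # single target-identity image (dummy data)
codes = enc(src_landmarks)                  # identity-invariant motion codes
frames = [gen(target_img, codes[:, t]) for t in range(codes.shape[1])]
```

In practice the motion encoder would be trained with identity-disentanglement objectives and the generator with adversarial and reconstruction losses, per the supervised and unsupervised settings mentioned in the abstract; those losses are omitted here for brevity.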
