Latent Image Animator: Learning to Animate Images via Latent Space Navigation

Animating images has become increasingly realistic and efficient owing to the remarkable progress of Generative Adversarial Networks (GANs) and auto-encoders. Current animation approaches commonly exploit structure representations extracted from driving videos. Such structure representations (e.g., keypoints or regions) are instrumental in transferring motion from driving videos to still images. However, such approaches fail when a source image and a driving video encompass large appearance variation. In addition, extracting structure information requires additional modules that increase the complexity of the animation model. Deviating from such models, we here introduce the Latent Image Animator (LIA), a self-supervised auto-encoder that evades the need for structure representation. LIA is streamlined to animate images by linear navigation in the latent space: motion in the generated video is constructed by linear displacement of codes in the latent space. Specifically, we learn a set of orthogonal motion directions simultaneously and use their linear combination to represent any displacement in the latent space. Extensive quantitative and qualitative analysis suggests that our model systematically and significantly outperforms state-of-the-art methods on the VoxCeleb, TaiChi and TED-talk datasets w.r.t. generation quality.
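The core idea above — expressing motion as a linear combination of learned orthogonal directions in latent space — can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function names, dimensions, and the Gram-Schmidt orthogonalization step are illustrative assumptions (in LIA the directions and per-frame magnitudes are learned end-to-end from driving video).

```python
import numpy as np

def orthogonalize(directions):
    """Gram-Schmidt: turn an arbitrary bank of motion direction vectors
    into a mutually orthonormal set (a stand-in for LIA's learned basis)."""
    basis = []
    for d in directions:
        for q in basis:
            d = d - np.dot(d, q) * q  # remove component along earlier directions
        basis.append(d / np.linalg.norm(d))
    return np.stack(basis)

def navigate(z_source, magnitudes, directions):
    """Displace a source latent code along the direction bank:
    z_target = z_source + sum_i a_i * d_i."""
    return z_source + magnitudes @ directions

rng = np.random.default_rng(0)
latent_dim, n_directions = 512, 20

# Hypothetical stand-ins: a random orthonormal direction bank, a source
# latent code, and per-frame magnitudes (predicted by a motion encoder in LIA).
D = orthogonalize(rng.standard_normal((n_directions, latent_dim)))
z_src = rng.standard_normal(latent_dim)
a = rng.standard_normal(n_directions)

z_drv = navigate(z_src, a, D)  # latent code of one generated frame
```

Because the directions are orthonormal, the magnitudes along each direction can be read back from the displacement (`(z_drv - z_src) @ D.T` recovers `a`), which is what makes the decomposition of arbitrary motion into this basis well defined.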

