Image Animation

27 papers with code • 0 benchmarks • 0 datasets

Image Animation is the task of animating a source image according to the motion of a driving video.

Most implemented papers

Latent Image Animator: Learning to animate image via latent space navigation

wyhsirius/LIA ICLR 2022

Deviating from such models, we here introduce the Latent Image Animator (LIA), a self-supervised auto-encoder that evades the need for structure representation.
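As a rough illustration of latent-space navigation, the sketch below shifts a source latent code along a learned dictionary of motion directions whose magnitudes would be predicted from the driving frame; the class name, dimensions, and interface are hypothetical, not LIA's actual code.

```python
import torch
import torch.nn as nn

class LatentNavigator(nn.Module):
    """Toy latent-space navigation: a learned dictionary of motion directions
    is linearly combined by per-frame magnitudes and added to the source code.
    Illustrative only; names and sizes are not LIA's."""
    def __init__(self, latent_dim=512, num_directions=20):
        super().__init__()
        # Learned motion directions in latent space.
        self.directions = nn.Parameter(torch.randn(num_directions, latent_dim))

    def forward(self, z_source, magnitudes):
        # z_source: (batch, latent_dim); magnitudes: (batch, num_directions)
        shift = magnitudes @ self.directions   # (batch, latent_dim)
        return z_source + shift                # navigated latent code
```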

Neural Fields in Visual Computing and Beyond

nvidiagameworks/kaolin-wisp 22 Nov 2021

Recent advances in machine learning have created increasing interest in solving visual computing problems using a class of coordinate-based neural networks that parametrize physical properties of scenes or objects across space and time.
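The coordinate-based idea can be illustrated with a toy MLP that maps space-time coordinates to a scene property; this is a minimal sketch under assumed dimensions, not the kaolin-wisp API, and it omits the positional encodings or feature grids that practical neural fields rely on.

```python
import torch
import torch.nn as nn

class CoordinateMLP(nn.Module):
    """Toy coordinate-based network: maps (x, y, z, t) coordinates to a scalar
    scene property (e.g. density). Illustrative sketch only."""
    def __init__(self, in_dim=4, hidden=256, out_dim=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, coords):
        # coords: (num_points, in_dim) sampled across space and time
        return self.net(coords)
```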

Move As You Like: Image Animation in E-Commerce Scenario

jialetao/dam 19 Dec 2021

Creative image animations are attractive in e-commerce applications, where motion transfer is one of the important ways to generate animations from static images.

Image Animation with Keypoint Mask

or-toledano/animation-with-keypoint-mask 20 Dec 2021

Motion transfer is the task of synthesizing future video frames of a single source image according to the motion from a given driving video.

Thin-Plate Spline Motion Model for Image Animation

yoyo-nb/thin-plate-spline-motion-model CVPR 2022

Firstly, we propose thin-plate spline motion estimation to produce a more flexible optical flow, which warps the feature maps of the source image to the feature domain of the driving image.
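A minimal sketch of the warping step, assuming the motion estimator has already produced a dense sampling grid in normalized coordinates; the function name and shapes are illustrative, and PyTorch's grid_sample stands in for the paper's full thin-plate spline pipeline.

```python
import torch
import torch.nn.functional as F

def warp_features(source_feats, flow_grid):
    """Warp source feature maps into the driving frame's domain with a dense
    flow field. flow_grid holds normalized sampling coordinates in [-1, 1],
    shape (B, H, W, 2). Illustrative sketch, not the paper's TPS estimator."""
    return F.grid_sample(source_feats, flow_grid, align_corners=True)

# Example usage with dummy tensors:
feats = torch.randn(1, 64, 32, 32)            # source feature maps
grid = torch.zeros(1, 32, 32, 2)              # identity-like sampling grid
warped = warp_features(feats, grid)           # (1, 64, 32, 32)
```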

Single Stage Virtual Try-on via Deformable Attention Flows

OFA-Sys/DAFlow 19 Jul 2022

Virtual try-on aims to generate a photo-realistic fitting result given an in-shop garment and a reference person image.

Motion Transformer for Unsupervised Image Animation

jialetao/motrans 28 Sep 2022

Image animation aims to animate a source image by using motion learned from a driving video.

SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation

winfredy/sadtalker CVPR 2023

We present SadTalker, which generates 3D motion coefficients (head pose, expression) of the 3DMM from audio and implicitly modulates a novel 3D-aware face render for talking head generation.
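A toy sketch of regressing 3DMM motion coefficients (head pose, expression) from audio features; the module, feature dimensions, and coefficient sizes are assumptions for illustration, not SadTalker's implementation.

```python
import torch
import torch.nn as nn

class AudioToCoeffs(nn.Module):
    """Toy audio-to-3DMM regressor: maps a sequence of audio features to
    per-frame head-pose and expression coefficients that could drive a
    face renderer. Dimensions and names are illustrative."""
    def __init__(self, audio_dim=80, pose_dim=6, exp_dim=64):
        super().__init__()
        self.backbone = nn.GRU(audio_dim, 128, batch_first=True)
        self.pose_head = nn.Linear(128, pose_dim)
        self.exp_head = nn.Linear(128, exp_dim)

    def forward(self, audio_feats):
        # audio_feats: (batch, time, audio_dim), e.g. mel-spectrogram frames
        h, _ = self.backbone(audio_feats)
        return self.pose_head(h), self.exp_head(h)
```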

Text-Guided Synthesis of Eulerian Cinemagraphs

text2cinemagraph/text2cinemagraph 6 Jul 2023

We introduce Text2Cinemagraph, a fully automated method for creating cinemagraphs from text descriptions - an especially challenging task when prompts feature imaginary elements and artistic styles, given the complexity of interpreting the semantics and motions of these images.

LAMP: Learn A Motion Pattern for Few-Shot-Based Video Generation

RQ-Wu/LAMP 16 Oct 2023

Specifically, we design a first-frame-conditioned pipeline that uses an off-the-shelf text-to-image model for content generation so that our tuned video diffusion model mainly focuses on motion learning.
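A hedged sketch of a first-frame-conditioned pipeline: a text-to-image model supplies the first frame and a tuned video model extends it with motion; both models are passed in as placeholder callables, and none of the names correspond to LAMP's actual code.

```python
def generate_clip(prompt, t2i_model, video_model, num_frames=16):
    """Hypothetical first-frame-conditioned generation. t2i_model and
    video_model are placeholder callables supplied by the caller."""
    # Content comes from an off-the-shelf text-to-image model.
    first_frame = t2i_model(prompt)
    # The tuned video model focuses on motion, conditioned on the first frame.
    frames = video_model(prompt, first_frame=first_frame, num_frames=num_frames)
    return [first_frame] + list(frames)
```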