Image Animation
27 papers with code • 0 benchmarks • 0 datasets
Image Animation is the task of animating a static source image according to the motion of a driving video.
Benchmarks
These leaderboards are used to track progress in Image Animation
Most implemented papers
Latent Image Animator: Learning to animate image via latent space navigation
Deviating from such models, we here introduce the Latent Image Animator (LIA), a self-supervised auto-encoder that evades the need for an explicit structure representation.
Neural Fields in Visual Computing and Beyond
Recent advances in machine learning have created increasing interest in solving visual computing problems using a class of coordinate-based neural networks that parametrize physical properties of scenes or objects across space and time.
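As a rough sketch of what "coordinate-based" means here (not taken from the paper itself): such networks map spatial or spatio-temporal coordinates to scene properties, typically after lifting the raw coordinates with a sinusoidal positional encoding so the network can represent high-frequency detail. A minimal NumPy version of that encoding, with `num_freqs` as an illustrative hyperparameter:

```python
import numpy as np

def positional_encoding(x, num_freqs=4):
    """Lift coordinates x of shape (..., D) into sin/cos features at
    octave-spaced frequencies, as commonly used by coordinate networks.
    This is an illustrative sketch, not any specific paper's code."""
    freqs = 2.0 ** np.arange(num_freqs) * np.pi   # pi, 2pi, 4pi, ...
    angles = x[..., None] * freqs                 # (..., D, F)
    feat = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return feat.reshape(*x.shape[:-1], -1)        # (..., D * 2F)
```

An MLP then consumes these features instead of raw (x, y, t) coordinates; the encoding is what lets a small network fit sharp spatial variation.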
Move As You Like: Image Animation in E-Commerce Scenario
Creative image animations are attractive in e-commerce applications, where motion transfer is one of the important ways to generate animations from static images.
Image Animation with Keypoint Mask
Motion transfer is the task of synthesizing future video frames of a single source image according to the motion from a given driving video.
Thin-Plate Spline Motion Model for Image Animation
Firstly, we propose thin-plate spline motion estimation to produce a more flexible optical flow, which warps the feature maps of the source image to the feature domain of the driving image.
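For intuition about why thin-plate splines give a "more flexible" flow than affine or rigid warps: a TPS is the smoothest deformation that exactly maps a set of control points to their targets. A minimal NumPy fit of a 2D thin-plate spline, written as a generic sketch rather than the paper's implementation:

```python
import numpy as np

def tps_radial(r2):
    # TPS kernel U(r) = r^2 log(r^2), with U(0) defined as 0.
    out = np.zeros_like(r2)
    mask = r2 > 0
    out[mask] = r2[mask] * np.log(r2[mask])
    return out

def fit_tps(src, dst):
    """Solve for TPS parameters mapping src control points (n, 2)
    exactly onto dst (n, 2). Returns a (n + 3, 2) parameter matrix:
    n radial weights followed by an affine part."""
    n = src.shape[0]
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    K = tps_radial(d2)
    P = np.hstack([np.ones((n, 1)), src])
    L = np.zeros((n + 3, n + 3))
    L[:n, :n] = K
    L[:n, n:] = P
    L[n:, :n] = P.T
    Y = np.zeros((n + 3, 2))
    Y[:n] = dst
    return np.linalg.solve(L, Y)

def tps_transform(points, src, params):
    """Apply a fitted TPS to arbitrary points of shape (m, 2)."""
    d2 = ((points[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    U = tps_radial(d2)
    P = np.hstack([np.ones((points.shape[0], 1)), points])
    return U @ params[:-3] + P @ params[-3:]
```

Evaluating the fitted spline on a dense pixel grid yields an optical-flow-like field that warps source feature maps toward the driving image while staying smooth between control points.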
Single Stage Virtual Try-on via Deformable Attention Flows
Virtual try-on aims to generate a photo-realistic fitting result given an in-shop garment and a reference person image.
Motion Transformer for Unsupervised Image Animation
Image animation aims to animate a source image by using motion learned from a driving video.
SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation
We present SadTalker, which generates 3D motion coefficients (head pose, expression) of the 3DMM from audio and implicitly modulates a novel 3D-aware face render for talking head generation.
Text-Guided Synthesis of Eulerian Cinemagraphs
We introduce Text2Cinemagraph, a fully automated method for creating cinemagraphs from text descriptions - an especially challenging task when prompts feature imaginary elements and artistic styles, given the complexity of interpreting the semantics and motions of these images.
LAMP: Learn A Motion Pattern for Few-Shot-Based Video Generation
Specifically, we design a first-frame-conditioned pipeline that uses an off-the-shelf text-to-image model for content generation so that our tuned video diffusion model mainly focuses on motion learning.