Image Animation

45 papers with code • 0 benchmarks • 0 datasets

Image Animation is the task of animating a source image using the motion extracted from a driving video.

Most implemented papers

AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning

guoyww/animatediff 10 Jul 2023

Once trained, the motion module can be inserted into a personalized T2I model to form a personalized animation generator.
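This plug-in use of the motion module is exposed, for example, through the Hugging Face diffusers integration (not part of this listing); a minimal sketch, where the adapter and personalized checkpoint IDs are illustrative examples:

```python
import torch
from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

# Pre-trained AnimateDiff motion module and a personalized SD 1.5 checkpoint;
# both model IDs are examples, swap in any compatible ones.
adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2")
pipe = AnimateDiffPipeline.from_pretrained(
    "emilianJR/epiCRealism", motion_adapter=adapter, torch_dtype=torch.float16
)
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")

# The frozen T2I weights supply appearance; the motion module adds temporal layers.
frames = pipe(
    prompt="a corgi running on the beach, golden hour",
    num_frames=16,
    guidance_scale=7.5,
    num_inference_steps=25,
).frames[0]
export_to_gif(frames, "animation.gif")
```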

First Order Motion Model for Image Animation

AliaksandrSiarohin/first-order-model NeurIPS 2019

To achieve this, we decouple appearance and motion information using a self-supervised formulation.
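At inference time the decoupling amounts to extracting keypoints from the source image and every driving frame, then warping the source appearance accordingly. A minimal sketch assuming the repo's demo.py helpers load_checkpoints and make_animation (signatures taken from the repo's README and may differ between versions):

```python
import imageio
import numpy as np
from skimage.transform import resize

# Helpers from the repo's demo.py (assumed signatures).
from demo import load_checkpoints, make_animation

# The generator carries appearance; the keypoint detector supplies motion.
generator, kp_detector = load_checkpoints(
    config_path="config/vox-256.yaml",
    checkpoint_path="vox-cpk.pth.tar",
)

source_image = resize(imageio.imread("source.png"), (256, 256))[..., :3]
driving_video = [
    resize(frame, (256, 256))[..., :3] for frame in imageio.get_reader("driving.mp4")
]

# Relative keypoint motion transfers the driving motion onto the source identity.
predictions = make_animation(
    source_image, driving_video, generator, kp_detector, relative=True
)
imageio.mimsave("result.mp4", [(255 * f).astype(np.uint8) for f in predictions], fps=25)
```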

MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model

magic-research/magic-animate CVPR 2024

Existing animation works typically employ the frame-warping technique to animate the reference image towards the target motion.
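For context, the frame-warping baseline referred to here means resampling the reference image along a dense motion field; a toy sketch of that operation with a hypothetical flow field (not MagicAnimate's own pipeline, which replaces warping with a diffusion model):

```python
import torch
import torch.nn.functional as F

# Toy backward warp: resample a reference image at locations offset by a dense
# displacement field. The flow here is made up purely for illustration.
B, C, H, W = 1, 3, 256, 256
reference = torch.rand(B, C, H, W)
flow = torch.zeros(B, 2, H, W)   # per-pixel (x, y) displacement in pixels
flow[:, 0] += 10.0               # sample 10 px to the right everywhere

# Build a pixel-coordinate grid, offset it by the flow, normalize to [-1, 1].
ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
base = torch.stack((xs, ys), dim=-1).float().unsqueeze(0)   # (1, H, W, 2), xy order
grid = base + flow.permute(0, 2, 3, 1)
grid[..., 0] = 2.0 * grid[..., 0] / (W - 1) - 1.0
grid[..., 1] = 2.0 * grid[..., 1] / (H - 1) - 1.0

warped = F.grid_sample(reference, grid, align_corners=True)
print(warped.shape)  # torch.Size([1, 3, 256, 256])
```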

Image Animation with Perturbed Masks

itsyoavshalev/Image-Animation-with-Perturbed-Masks CVPR 2022

We present a novel approach for image-animation of a source image by a driving video, both depicting the same type of object.

SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation

winfredy/sadtalker CVPR 2023

We present SadTalker, which generates 3D motion coefficients (head pose, expression) of the 3DMM from audio and implicitly modulates a novel 3D-aware face render for talking head generation.

DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors

Doubiiu/DynamiCrafter 18 Oct 2023

Animating a still image offers an engaging visual experience.

Moonshot: Towards Controllable Video Generation and Editing with Multimodal Conditions

salesforce/lavis 3 Jan 2024

This work presents Moonshot, a new video generation model that conditions simultaneously on multimodal inputs of image and text.

UniAnimate: Taming Unified Video Diffusion Models for Consistent Human Image Animation

ali-vilab/unianimate-dit 3 Jun 2024

First, to reduce the optimization difficulty and ensure temporal coherence, we map the reference image along with the posture guidance and noise video into a common feature space by incorporating a unified video diffusion model.
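The "common feature space" can be pictured as stacking the encoded reference image, the pose guidance, and the noised video latents into a single tensor that the unified video diffusion model denoises; a toy sketch with hypothetical shapes and names, not the repo's actual code:

```python
import torch

# Hypothetical latent shapes: batch, channels, frames, height, width.
B, C, T, H, W = 1, 4, 16, 32, 32

noisy_video   = torch.randn(B, C, T, H, W)   # noise video latents to be denoised
pose_guidance = torch.randn(B, C, T, H, W)   # encoded posture guidance, one map per frame
ref_latent    = torch.randn(B, C, 1, H, W)   # encoded reference image (single frame)

# Broadcast the reference frame across time and concatenate along channels, so
# every denoised frame sees appearance and motion guidance in the same space.
ref_latent = ref_latent.expand(-1, -1, T, -1, -1)
unified_input = torch.cat([noisy_video, pose_guidance, ref_latent], dim=1)
print(unified_input.shape)  # torch.Size([1, 12, 16, 32, 32])
```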

Animating Arbitrary Objects via Deep Motion Transfer

AliaksandrSiarohin/monkey-net CVPR 2019

This is achieved through a deep architecture that decouples appearance and motion information.

Creative Flow+ Dataset

creativefloworg/creativeflow CVPR 2019

We present the Creative Flow+ Dataset, the first diverse multi-style artistic video dataset richly labeled with per-pixel optical flow, occlusions, correspondences, segmentation labels, normals, and depth.