Image to Video Generation

16 papers with code • 0 benchmarks • 0 datasets

Image to Video Generation refers to the task of generating a sequence of video frames from a single still image or a set of still images. The goal is to produce a video that is coherent in appearance, motion, and style, and temporally consistent, so that the generated frames read as a smooth, correctly ordered sequence. This task is typically tackled with deep generative models, such as diffusion models, Generative Adversarial Networks (GANs), or Variational Autoencoders (VAEs), trained on large video datasets. The models learn to generate plausible video frames conditioned on the input image and, optionally, on auxiliary signals such as audio or text.
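
For a concrete sense of the image-conditioned setup, here is a minimal sketch using the Hugging Face diffusers library's Stable Video Diffusion image-to-video pipeline. This model is not one of the papers listed below, and the checkpoint name, input resolution, frame decoding chunk size, and fps are illustrative assumptions rather than prescribed settings.

import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Load a pretrained image-to-video diffusion pipeline (assumed checkpoint).
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16
)
pipe.to("cuda")

# Condition generation on a single still image.
image = load_image("input.png").resize((1024, 576))
generator = torch.manual_seed(42)
frames = pipe(image, decode_chunk_size=8, generator=generator).frames[0]

# Write the generated frame sequence to disk as a short video clip.
export_to_video(frames, "output.mp4", fps=7)

The pipeline handles the conditioning internally: the still image is encoded and injected into the reverse diffusion process so that every generated frame stays consistent with it.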

Most implemented papers

AnimateAnything: Fine-Grained Open Domain Image Animation with Motion Guidance

alibaba/animate-anything 21 Nov 2023

Image animation is a key task in computer vision which aims to generate dynamic visual content from a static image.

VideoAssembler: Identity-Consistent Video Generation with Reference Entities using Diffusion Model

gulucaptain/videoassembler 29 Nov 2023

Identity-consistent video generation seeks to synthesize videos that are guided by both textual prompts and reference images of entities.

ConsistI2V: Enhancing Visual Consistency for Image-to-Video Generation

TIGER-AI-Lab/ConsistI2V 6 Feb 2024

To verify the effectiveness of our method, we propose I2V-Bench, a comprehensive evaluation benchmark for I2V generation.

Mora: Enabling Generalist Video Generation via A Multi-Agent Framework

lichao-sun/mora 20 Mar 2024

Sora is the first large-scale generalist video generation model that garnered significant attention across society.

Champ: Controllable and Consistent Human Image Animation with 3D Parametric Guidance

fudan-generative-vision/champ 21 Mar 2024

In this study, we introduce a methodology for human image animation by leveraging a 3D human parametric model within a latent diffusion framework to enhance shape alignment and motion guidance in current human generative techniques.
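
As a loose illustration of how renderings from a 3D human parametric model can guide a per-frame latent diffusion denoiser (the general idea behind such guidance, not Champ's actual pipeline), the helpers below are hypothetical placeholders with dummy tensors.

import torch

def render_guidance_maps(body_params):
    # Hypothetical helper: render depth/normal/semantic maps from fitted
    # 3D body-model parameters for one target frame (dummy tensor here).
    return torch.randn(1, 3, 64, 64)

def denoise_step(latents, guidance, reference, t):
    # Hypothetical helper: one reverse-diffusion step of a latent denoiser,
    # conditioned on the rendered guidance and reference-image features.
    return latents - 0.01 * (latents - guidance.mean())

def animate(reference, body_param_sequence, num_steps=50):
    frames = []
    for body_params in body_param_sequence:
        guidance = render_guidance_maps(body_params)  # shape/motion guidance
        latents = torch.randn(1, 4, 64, 64)           # assumed latent size
        for t in reversed(range(num_steps)):
            latents = denoise_step(latents, guidance, reference, t)
        frames.append(latents)
    return torch.stack(frames)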

TI2V-Zero: Zero-Shot Image Conditioning for Text-to-Video Diffusion Models

nihaomiao/cvpr23_lfdm 25 Apr 2024

To guide video generation with the additional image input, we propose a "repeat-and-slide" strategy that modulates the reverse denoising process, allowing the frozen diffusion model to synthesize a video frame-by-frame starting from the provided image.
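
For intuition, here is a loose, hypothetical sketch of the repeat-and-slide idea: the frozen model's temporal window is first filled with copies of the conditioning image ("repeat"), and each step synthesizes only the newest slot before shifting the window forward ("slide"). The window size, the denoise_last_frame stand-in, and the tensor shapes are assumptions for illustration and do not mirror the released code.

import torch

WINDOW = 8  # temporal window the frozen model expects (assumed)

def denoise_last_frame(window_frames, prompt):
    # Hypothetical stand-in for the frozen text-to-video model's reverse
    # denoising: only the final slot of the window is synthesized, while the
    # earlier slots act as fixed conditioning. A dummy frame is returned here.
    return torch.randn_like(window_frames[-1])

def repeat_and_slide(first_frame, prompt, num_frames=16):
    # "Repeat": fill the whole temporal window with copies of the input image.
    window = [first_frame.clone() for _ in range(WINDOW)]
    video = [first_frame]
    for _ in range(num_frames - 1):
        # Generate one new frame, conditioned on the frozen earlier slots.
        new_frame = denoise_last_frame(torch.stack(window), prompt)
        video.append(new_frame)
        # "Slide": drop the oldest slot and append the newly generated frame.
        window = window[1:] + [new_frame]
    return torch.stack(video)

video = repeat_and_slide(torch.randn(3, 256, 256), "a dog running")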