Video Generation

229 papers with code • 15 benchmarks • 14 datasets

(Various video generation tasks. GIF credit: MAGVIT)

Latest papers with no code

TC4D: Trajectory-Conditioned Text-to-4D Generation

no code yet • 26 Mar 2024

We learn local deformations that conform to the global trajectory using supervision from a text-to-video model.
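The idea of composing a global trajectory with learned local deformations can be illustrated with a small sketch. This is not the TC4D code; the `LocalDeformation` module, the `trajectory` callable, and all parameter choices are hypothetical placeholders, and the text-to-video supervision itself is omitted.

```python
# Illustrative sketch only (not the TC4D implementation): a learned local
# deformation field composed with a global trajectory offset. All names
# and shapes here are assumptions for illustration.
import torch
import torch.nn as nn

class LocalDeformation(nn.Module):
    """Small MLP predicting a per-point displacement from position and time."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, points, t):
        # points: (N, 3), t: scalar in [0, 1]
        t_col = torch.full((points.shape[0], 1), float(t), device=points.device)
        return points + self.net(torch.cat([points, t_col], dim=-1))

def deform_along_trajectory(points, t, trajectory, local_deform):
    """Apply the local deformation, then translate along the global trajectory."""
    offset = trajectory(t)                 # (3,) global position at time t
    return local_deform(points, t) + offset
```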

Annotated Biomedical Video Generation using Denoising Diffusion Probabilistic Models and Flow Fields

no code yet • 26 Mar 2024

It is composed of a denoising diffusion probabilistic model (DDPM) generating high-fidelity synthetic cell microscopy images and a flow prediction model (FPM) predicting the non-rigid transformation between consecutive video frames.
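A minimal sketch of that two-stage idea, not the paper's code: a DDPM produces a synthetic first frame and a flow prediction model outputs a dense flow used to warp it into the next frame. `ddpm_sample` and `flow_model` are hypothetical placeholders standing in for the trained DDPM and FPM.

```python
# Hedged sketch: DDPM-generated first frame propagated by predicted flow.
import torch
import torch.nn.functional as F

def warp(frame, flow):
    """Warp a frame (1, C, H, W) with a flow field (1, 2, H, W) via grid_sample."""
    _, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0).to(frame.device)
    coords = grid + flow
    # Normalize pixel coordinates to [-1, 1] as grid_sample expects.
    coords_x = 2 * coords[:, 0] / (w - 1) - 1
    coords_y = 2 * coords[:, 1] / (h - 1) - 1
    norm_grid = torch.stack((coords_x, coords_y), dim=-1)
    return F.grid_sample(frame, norm_grid, align_corners=True)

def generate_clip(ddpm_sample, flow_model, num_frames=8):
    frame = ddpm_sample()                      # (1, C, H, W) synthetic first frame
    frames = [frame]
    for _ in range(num_frames - 1):
        flow = flow_model(frames[-1])          # (1, 2, H, W) predicted flow
        frames.append(warp(frames[-1], flow))  # warped next frame
    return torch.cat(frames, dim=0)
```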

Tutorial on Diffusion Models for Imaging and Vision

no code yet • 26 Mar 2024

The goal of this tutorial is to discuss the essential ideas underlying diffusion models.

A Survey on Long Video Generation: Challenges, Methods, and Prospects

no code yet • 25 Mar 2024

Video generation is a rapidly advancing research area, garnering significant attention due to its broad range of applications.

TRIP: Temporal Residual Learning with Image Noise Prior for Image-to-Video Diffusion Models

no code yet • 25 Mar 2024

Next, TRIP executes a residual-like dual-path scheme for noise prediction: 1) a shortcut path that directly takes image noise prior as the reference noise of each frame to amplify the alignment between the first frame and subsequent frames; 2) a residual path that employs 3D-UNet over noised video and static image latent codes to enable inter-frame relational reasoning, thereby easing the learning of the residual noise for each frame.
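The dual-path scheme can be illustrated schematically. This is a hedged sketch rather than the TRIP implementation: `unet3d` is an assumed noise-prediction backbone, the channel-wise concatenation of video and image latents is one plausible conditioning choice, and `alpha` is an illustrative mixing weight.

```python
# Illustration of the residual-like dual-path noise prediction described above.
import torch

def dual_path_noise(unet3d, noised_video_latents, image_latent,
                    image_noise_prior, timestep, alpha=0.5):
    """
    noised_video_latents: (B, C, T, H, W) latents of the noised video
    image_latent:         (B, C, 1, H, W) latent of the conditioning first frame
    image_noise_prior:    (B, C, 1, H, W) noise estimate for the first frame
    Returns a per-frame noise prediction of shape (B, C, T, H, W).
    """
    t = noised_video_latents.shape[2]
    # Shortcut path: broadcast the image noise prior to every frame.
    shortcut = image_noise_prior.expand(-1, -1, t, -1, -1)
    # Residual path: a 3D UNet reasons over the noised video latents
    # concatenated with the repeated static image latent.
    cond = image_latent.expand(-1, -1, t, -1, -1)
    residual = unet3d(torch.cat([noised_video_latents, cond], dim=1), timestep)
    # Combine both paths into the final per-frame noise estimate.
    return alpha * shortcut + (1 - alpha) * residual
```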

Opportunities and challenges in the application of large artificial intelligence models in radiology

no code yet • 24 Mar 2024

Influenced by ChatGPT, large artificial intelligence (AI) models have seen a global upsurge in research and development.

Spectral Motion Alignment for Video Motion Transfer using Diffusion Models

no code yet • 22 Mar 2024

The evolution of diffusion models has greatly impacted video generation and understanding.

AnyV2V: A Plug-and-Play Framework For Any Video-to-Video Editing Tasks

no code yet • 21 Mar 2024

In the second stage, AnyV2V can plug in any existing image-to-video model to perform DDIM inversion and intermediate feature injection, maintaining appearance and motion consistency with the source video.
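The DDIM inversion ingredient mentioned here follows a standard recipe, sketched below. This is a generic illustration, not AnyV2V's code; `eps_model` (a noise-prediction network) and `alphas_cumprod` (its cumulative noise schedule) are assumed inputs, and the feature-injection step is omitted.

```python
# Generic DDIM inversion: map a clean latent back to a noisy latent by
# running the deterministic DDIM update in the forward (noising) direction.
import torch

@torch.no_grad()
def ddim_invert(eps_model, x0_latent, alphas_cumprod, num_steps=50):
    steps = torch.linspace(0, len(alphas_cumprod) - 1, num_steps).long()
    x = x0_latent
    for i in range(len(steps) - 1):
        t, t_next = steps[i], steps[i + 1]
        a_t, a_next = alphas_cumprod[t], alphas_cumprod[t_next]
        eps = eps_model(x, t)                              # predicted noise at step t
        x0_pred = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()
        x = a_next.sqrt() * x0_pred + (1 - a_next).sqrt() * eps
    return x  # noisy latent, later re-denoised with injected features
```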

Explorative Inbetweening of Time and Space

no code yet • 21 Mar 2024

We introduce bounded generation as a generalized task for controlling video generation, synthesizing arbitrary camera and subject motion from only a given start and end frame.

Enabling Visual Composition and Animation in Unsupervised Video Generation

no code yet • 21 Mar 2024

We call our model CAGE for visual Composition and Animation for video GEneration.