Unconditional Video Generation

9 papers with code • 1 benchmark • 1 dataset


Most implemented papers

Video Diffusion Models

lucidrains/make-a-video-pytorch 7 Apr 2022

Generating temporally coherent high fidelity video is an important milestone in generative modeling research.

MOSO: Decomposing MOtion, Scene and Object for Video Prediction

iva-mzsun/moso CVPR 2023

Experimental results demonstrate that our method achieves new state-of-the-art performance on five challenging benchmarks for video prediction and unconditional video generation: BAIR, RoboNet, KTH, KITTI and UCF101.

Latent Neural Differential Equations for Video Generation

Zasder3/Latent-Neural-Differential-Equations-for-Video-Generation 7 Nov 2020

Generative Adversarial Networks have recently shown promise for video generation, building off of the success of image generation while also addressing a new challenge: time.

CelebV-HQ: A Large-Scale Video Facial Attributes Dataset

celebv-hq/celebv-hq 25 Jul 2022

Large-scale datasets have played indispensable roles in the recent success of face generation/editing and significantly facilitated the advances of emerging research fields.

MotionVideoGAN: A Novel Video Generator Based on the Motion Space Learned from Image Pairs

bbzhu-jy16/motionvideogan 6 Mar 2023

We present MotionVideoGAN, a novel video generator synthesizing videos based on the motion space learned by pre-trained image pair generators.

Video Diffusion Models with Local-Global Context Guidance

exisas/lgc-vd 5 Jun 2023

We construct a local-global context guidance strategy to capture the multi-perceptual embedding of the past fragment to boost the consistency of future prediction.
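The idea of conditioning future prediction on both per-frame (local) and fragment-level (global) context can be sketched roughly as follows. This is an illustrative sketch only, assuming the past fragment is represented as one embedding per frame; the function and variable names (`build_guidance`, `frame_features`) are hypothetical and not from the paper's code.

```python
import numpy as np

def build_guidance(frame_features: np.ndarray) -> np.ndarray:
    """Combine local and global context from a past fragment (illustrative).

    frame_features: (T, D) array, one D-dim embedding per past frame.
    Returns a (T, 2*D) conditioning array: each frame's local embedding
    concatenated with the fragment's mean-pooled global embedding.
    """
    # Global context: mean-pool over the whole fragment, then broadcast
    # the same vector to every frame position.
    global_ctx = frame_features.mean(axis=0, keepdims=True)          # (1, D)
    global_ctx = np.repeat(global_ctx, frame_features.shape[0], axis=0)
    # Local context: each frame keeps its own embedding alongside the
    # shared global one.
    return np.concatenate([frame_features, global_ctx], axis=1)     # (T, 2*D)

past = np.random.randn(8, 64)  # hypothetical: 8 past frames, 64-dim features
cond = build_guidance(past)
print(cond.shape)
```

In a diffusion-based predictor, such a conditioning tensor would typically be fed into the denoising network so that generated future frames stay consistent with both frame-level detail and fragment-level content.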

DDLP: Unsupervised Object-Centric Video Prediction with Deep Dynamic Latent Particles

taldatech/ddlp 9 Jun 2023

We propose a new object-centric video prediction algorithm based on the deep latent particle (DLP) representation.

StyleInV: A Temporal Style Modulated Inversion Network for Unconditional Video Generation

johannwyh/styleinv ICCV 2023

In this paper, we introduce a novel motion generator design that uses a learning-based inversion network for GANs.

StyleCineGAN: Landscape Cinemagraph Generation using a Pre-trained StyleGAN

jeolpyeoni/StyleCineGAN 21 Mar 2024

We propose a method that can generate cinemagraphs automatically from a still landscape image using a pre-trained StyleGAN.