Video Generation

100 papers with code • 10 benchmarks • 10 datasets

(Image credit: Logacheva et al.)

Most implemented papers

GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium

bioinf-jku/TTUR NeurIPS 2017

Generative Adversarial Networks (GANs) excel at creating realistic images with complex models for which maximum likelihood is infeasible.
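As a rough illustration of the two time-scale update rule, the sketch below trains the discriminator and generator with separate Adam optimizers at different learning rates. The specific rates (a 4:1 ratio common in follow-up work), network shapes, and data are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of a two time-scale update rule (TTUR) in PyTorch.
# Learning rates and architectures here are assumed, not the paper's.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 784))
D = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

# Two time scales: the discriminator learns faster than the generator.
opt_D = torch.optim.Adam(D.parameters(), lr=4e-4, betas=(0.0, 0.9))
opt_G = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.0, 0.9))

bce = nn.BCEWithLogitsLoss()
real = torch.randn(32, 784)  # stand-in for real data

for step in range(100):
    # Discriminator step (fast time scale).
    z = torch.randn(32, 64)
    fake = G(z).detach()
    loss_D = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator step (slow time scale).
    z = torch.randn(32, 64)
    loss_G = bce(D(G(z)), torch.ones(32, 1))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
```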

Everybody Dance Now

carolineec/EverybodyDanceNow ICCV 2019

This paper presents a simple method for "do as I do" motion transfer: given a source video of a person dancing, we can transfer that performance to a novel (amateur) target after only a few minutes of the target subject performing standard moves.
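The per-frame structure of this kind of pose-guided motion transfer can be sketched as: estimate the source subject's pose, normalize it to the target's body proportions, and feed it to an image-to-image generator trained on the target. Every function below is a hypothetical placeholder, not the paper's actual components.

```python
# Hypothetical sketch of pose-guided motion transfer; estimate_pose,
# normalize_pose, and generator are illustrative stand-ins.
import numpy as np

def estimate_pose(frame):
    # Placeholder: a real system would run an off-the-shelf pose estimator.
    return np.random.rand(18, 2)  # 18 keypoints, (x, y)

def normalize_pose(pose, src_height, tgt_height):
    # Rescale source keypoints so limb proportions match the target subject.
    return pose * (tgt_height / src_height)

def generator(pose):
    # Placeholder for a pose-to-image network trained on a few minutes
    # of the target subject's video.
    return np.zeros((256, 256, 3))

source_video = [np.zeros((256, 256, 3)) for _ in range(8)]
output = []
for frame in source_video:
    pose = estimate_pose(frame)
    pose = normalize_pose(pose, src_height=1.0, tgt_height=0.9)
    output.append(generator(pose))  # target subject, source motion
```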

Learning Temporal Coherence via Self-Supervision for GAN-based Video Generation

thunil/TecoGAN 23 Nov 2018

Additionally, we propose a first set of metrics to quantitatively evaluate the accuracy as well as the perceptual quality of the temporal evolution.
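A drastically simplified metric in this spirit compares frame-to-frame change in the generated video against frame-to-frame change in the ground truth. The paper's actual metrics use estimated optical flow and learned perceptual distances; plain pixel differences below are a crude stand-in.

```python
# Simplified temporal-consistency metric: compare the motion (frame
# differences) of a generated clip with that of a reference clip.
import numpy as np

def temporal_error(generated, reference):
    """Both inputs: arrays of shape (T, H, W, C) with values in [0, 1]."""
    gen_motion = np.abs(np.diff(generated, axis=0))  # change between frames
    ref_motion = np.abs(np.diff(reference, axis=0))
    return float(np.mean(np.abs(gen_motion - ref_motion)))

gen = np.random.rand(16, 64, 64, 3)
ref = np.random.rand(16, 64, 64, 3)
print(temporal_error(gen, ref))  # 0 would mean identical temporal behaviour
```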

MoCoGAN: Decomposing Motion and Content for Video Generation

sergeytulyakov/mocogan CVPR 2018

The proposed framework generates a video by mapping a sequence of random vectors to a sequence of video frames.
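The decomposition can be sketched as follows: one content vector is fixed for the whole clip, a recurrent network emits one motion vector per frame, and an image generator maps each pair to a frame. All sizes and networks are illustrative, not the paper's settings.

```python
# Minimal PyTorch sketch of the content/motion decomposition.
import torch
import torch.nn as nn

content_dim, motion_dim, T = 50, 10, 16
rnn = nn.GRUCell(motion_dim, motion_dim)
image_G = nn.Sequential(nn.Linear(content_dim + motion_dim, 256),
                        nn.ReLU(), nn.Linear(256, 64 * 64))

z_content = torch.randn(1, content_dim)  # shared across all frames
h = torch.zeros(1, motion_dim)           # recurrent motion state
frames = []
for t in range(T):
    eps = torch.randn(1, motion_dim)     # per-frame randomness
    h = rnn(eps, h)                      # motion trajectory
    frame = image_G(torch.cat([z_content, h], dim=1))
    frames.append(frame.view(64, 64))
video = torch.stack(frames)              # (T, 64, 64)
```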

Stochastic Adversarial Video Prediction

alexlee-gk/video_prediction ICLR 2019

However, learning to predict raw future observations, such as frames in a video, is exceedingly challenging -- the ambiguous nature of the problem can cause a naively designed model to average together possible futures into a single, blurry prediction.
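A tiny numeric example makes the averaging effect concrete: if the future is "object moves left" or "object moves right" with equal probability, the single prediction minimizing expected squared error is their average, which matches neither outcome.

```python
# Why deterministic MSE prediction blurs: the averaged prediction
# achieves the lowest expected squared error over two possible futures.
import numpy as np

future_a = np.array([1.0, 0.0])  # object ends up on the left
future_b = np.array([0.0, 1.0])  # object ends up on the right

candidates = [future_a, future_b, (future_a + future_b) / 2]
for pred in candidates:
    mse = 0.5 * np.sum((pred - future_a) ** 2) + \
          0.5 * np.sum((pred - future_b) ** 2)
    print(pred, "expected MSE:", mse)
# The "blurry" average [0.5, 0.5] wins (MSE 0.5 vs 1.0), so an
# MSE-trained model is pushed toward it.
```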

Unsupervised Learning for Physical Interaction through Video Prediction

tensorflow/models NeurIPS 2016

A core challenge for an agent learning to interact with the world is to predict how its actions affect objects in its environment.

Temporal Generative Adversarial Nets with Singular Value Clipping

universome/stylegan-v ICCV 2017

In this paper, we propose a generative model, Temporal Generative Adversarial Nets (TGAN), which can learn a semantic representation of unlabeled videos, and is capable of generating videos.
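The singular value clipping of the title can be sketched as a projection applied after each update: decompose each weight matrix, clamp its singular values to at most 1, and reconstruct, keeping the layer approximately 1-Lipschitz as a WGAN critic requires. Applying it to every Linear layer, as below, is an illustrative simplification.

```python
# Minimal sketch of singular value clipping for a WGAN critic.
import torch
import torch.nn as nn

def clip_singular_values(module, max_sv=1.0):
    with torch.no_grad():
        for layer in module.modules():
            if isinstance(layer, nn.Linear):
                U, S, Vh = torch.linalg.svd(layer.weight, full_matrices=False)
                S = torch.clamp(S, max=max_sv)  # clip the spectrum
                layer.weight.copy_(U @ torch.diag(S) @ Vh)

critic = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
clip_singular_values(critic)  # call after each optimizer step
```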

Stochastic Variational Video Prediction

StanfordVL/roboturk_real_dataset ICLR 2018

We find that our proposed method produces substantially improved video predictions when compared to the same model without stochasticity, and to other stochastic video prediction methods.

Hierarchical Video Generation from Orthogonal Information: Optical Flow and Texture

mil-tokyo/FTGAN 27 Nov 2017

FlowGAN generates optical flow, which captures only the edges and motion of the videos to be generated.
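The two-stage hierarchy can be sketched as: a flow generator first produces a motion-only representation, and a texture generator then fills in appearance conditioned on that flow. Both networks below are hypothetical stand-ins, not the paper's architectures.

```python
# Hypothetical two-stage flow-then-texture sketch; shapes are illustrative.
import torch
import torch.nn as nn

T, H, W = 8, 32, 32
flow_G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(),
                       nn.Linear(256, T * H * W * 2))     # (dx, dy) per pixel
texture_G = nn.Sequential(nn.Linear(T * H * W * 2 + 100, 512), nn.ReLU(),
                          nn.Linear(512, T * H * W * 3))  # RGB per pixel

z_motion = torch.randn(1, 100)
flow = flow_G(z_motion)                          # stage 1: motion only
z_texture = torch.randn(1, 100)
video = texture_G(torch.cat([flow, z_texture], dim=1))
video = video.view(T, H, W, 3)                   # stage 2: appearance
```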

Stochastic Video Generation with a Learned Prior

edenton/svg ICML 2018

Sample generations are both varied and sharp, even many frames into the future, and compare favorably to those from existing approaches.
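Generation with a learned prior can be sketched as: a prior network looks at the frames so far and outputs a distribution over the latent z_t, and a sample from it drives the frame predictor, so different samples yield different, individually sharp futures. Shapes and networks below are illustrative, not the paper's.

```python
# Minimal sketch of sampling video rollouts from a learned prior.
import torch
import torch.nn as nn

frame_dim, z_dim = 64 * 64, 16
prior_net = nn.Sequential(nn.Linear(frame_dim, 128), nn.ReLU(),
                          nn.Linear(128, 2 * z_dim))  # mu and log-variance
predictor = nn.Sequential(nn.Linear(frame_dim + z_dim, 256), nn.ReLU(),
                          nn.Linear(256, frame_dim))

frame = torch.zeros(1, frame_dim)  # stand-in for the last observed frame
frames = [frame]
for t in range(10):
    stats = prior_net(frame)
    mu, logvar = stats.chunk(2, dim=1)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
    frame = predictor(torch.cat([frame, z], dim=1))       # next frame
    frames.append(frame)
# Re-running the loop with fresh samples produces a different future.
```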