Predicting Video with VQVAE

2 Mar 2021 · Jacob Walker, Ali Razavi, Aäron van den Oord

In recent years, the task of video prediction (forecasting future video given past video frames) has attracted attention in the research community. In this paper, we propose a novel approach to this problem using Vector Quantized Variational AutoEncoders (VQ-VAE). With VQ-VAE, we compress high-resolution videos into a hierarchical set of multi-scale discrete latent variables. Compared to pixels, this compressed latent space has dramatically reduced dimensionality, allowing us to apply scalable autoregressive generative models to predict video. In contrast to previous work, which has largely emphasized highly constrained datasets, we focus on very diverse, large-scale datasets such as Kinetics-600. We predict unconstrained video at 256x256, a higher resolution than any previous method to our knowledge. We further validate our approach against prior work via a crowdsourced human evaluation.
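The core compression step the abstract describes, replacing each continuous encoder output with its nearest codebook entry so that video becomes a grid of discrete tokens an autoregressive prior can model, can be sketched as follows. This is a minimal NumPy illustration under assumed sizes (512 codes of dimension 64, a 16x16 latent frame), not the paper's implementation:

```python
import numpy as np

def vector_quantize(z, codebook):
    """Map each row of z to its nearest codebook entry.

    z:        (N, D) continuous encoder outputs
    codebook: (K, D) learned embedding vectors
    Returns discrete indices in [0, K) and the quantized vectors.
    """
    # Squared Euclidean distance from every vector to every code: (N, K)
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = dists.argmin(axis=1)        # discrete latent codes
    return idx, codebook[idx]         # quantized (snapped) vectors

rng = np.random.default_rng(0)
codebook = rng.normal(size=(512, 64))   # K=512 codes, D=64 (illustrative)
z = rng.normal(size=(16 * 16, 64))      # one hypothetical 16x16 latent frame
codes, z_q = vector_quantize(z, codebook)
```

The discrete `codes` array is what makes the approach scalable: a generative model now predicts a short sequence of integer tokens per frame rather than hundreds of thousands of pixels.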

Benchmark result:
  Task: Video Prediction
  Dataset: Kinetics-600 (12 frames, 64x64)
  Model: Video VQ-VAE
  Metric: FVD = 64.30 ± 2.04 (global rank: #10)
  Conditioning frames: 4
  Predicted frames: 12
