Transformation-based Adversarial Video Prediction on Large-Scale Data

Recent breakthroughs in adversarial generative modeling have led to models capable of producing video samples of high quality, even on large and complex datasets of real-world video. In this work, we focus on the task of video prediction, where given a sequence of frames extracted from a video, the goal is to generate a plausible future sequence. We first improve the state of the art by performing a systematic empirical study of discriminator decompositions and proposing an architecture that yields faster convergence and higher performance than previous approaches. We then analyze recurrent units in the generator, and propose a novel recurrent unit which transforms its past hidden state according to predicted motion-like features, and refines it to handle disocclusions, scene changes and other complex behavior. We show that this recurrent unit consistently outperforms previous designs. Our final model leads to a leap in state-of-the-art performance, obtaining a test set Fréchet Video Distance of 25.7, down from 69.2, on the large-scale Kinetics-600 dataset.
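
The core idea of the proposed recurrent unit can be illustrated with a small sketch. The following is a minimal example, not the authors' implementation: it warps the previous hidden state with a predicted, flow-like motion field and then blends in a refinement through a gate, which is where disocclusions and other content the warp cannot explain would be handled. The function names `bilinear_warp` and `transform_and_refine` are illustrative, and the flow, gate and refinement arrays are random stand-ins for what would be outputs of learned convolutions.

```python
# Minimal sketch of a transformation-based recurrent update (assumptions noted
# in the text above): warp the previous hidden state by a predicted motion
# field, then gate in a learned refinement for content the warp cannot explain.
import numpy as np

def bilinear_warp(h, flow):
    """Backward-warp a feature map h of shape (H, W, C) by a per-pixel
    displacement field flow of shape (H, W, 2), using bilinear sampling."""
    H, W, _ = h.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Source coordinates each output pixel samples from (clipped to the image).
    src_y = np.clip(ys + flow[..., 0], 0, H - 1)
    src_x = np.clip(xs + flow[..., 1], 0, W - 1)
    y0, x0 = np.floor(src_y).astype(int), np.floor(src_x).astype(int)
    y1, x1 = np.minimum(y0 + 1, H - 1), np.minimum(x0 + 1, W - 1)
    wy, wx = (src_y - y0)[..., None], (src_x - x0)[..., None]
    top = (1 - wx) * h[y0, x0] + wx * h[y0, x1]
    bot = (1 - wx) * h[y1, x0] + wx * h[y1, x1]
    return (1 - wy) * top + wy * bot

def transform_and_refine(h_prev, flow, gate, refinement):
    """One recurrent step: transform the past hidden state by the predicted
    motion, then blend with a refinement via a gate in (0, 1)."""
    h_warped = bilinear_warp(h_prev, flow)
    return gate * h_warped + (1.0 - gate) * refinement

# Toy usage with random stand-ins for the learned quantities.
rng = np.random.default_rng(0)
h_prev = rng.standard_normal((16, 16, 8))
flow = rng.standard_normal((16, 16, 2))                           # motion-like features
gate = 1.0 / (1.0 + np.exp(-rng.standard_normal((16, 16, 1))))    # sigmoid gate
refinement = rng.standard_normal((16, 16, 8))
h_next = transform_and_refine(h_prev, flow, gate, refinement)
print(h_next.shape)  # (16, 16, 8)
```

In practice the flow, gate and refinement would be predicted by convolutions on the unit's inputs and previous state, so the warp handles smooth motion while the gated refinement accounts for disocclusions and scene changes.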

Task              Dataset                          Model         Metric     Value        Global Rank
Video Generation  BAIR Robot Pushing               TrIVD-GAN-FP  FVD score  103.3        #9
                                                                 Cond       1            #1
                                                                 Pred       15           #8
                                                                 Train      15           #2
Video Prediction  Kinetics-600 (12 frames, 64x64)  TrIVD-GAN-FP  FVD        25.74±0.66   #8
                                                                 Cond       5            #2
                                                                 Pred       11           #2
                                                                 IS         12.54±0.06   #3

(Cond, Pred and Train denote the number of conditioning, predicted and training frames, respectively.)
