SSW-GAN: Scalable Stage-wise Training of Video GANs

1 Jan 2021  ·  Lluis Castrejon, Nicolas Ballas, Aaron Courville

Current state-of-the-art generative models for videos have high computational requirements that impede high-resolution generation beyond a few frames. In this work we propose a stage-wise strategy to train Generative Adversarial Networks (GANs) for videos, which first produces a downsampled video that is upscaled and temporally interpolated by subsequent stages. Stages are trained sequentially, and upsampling is performed locally on temporal chunks of previous outputs to manage the computational complexity. We impose global consistency on the full-resolution generation by conditioning on the downsampled video, which grounds the local upsampling operations. We validate our approach on Kinetics-600 and BDD100K, for which we train a three-stage model capable of generating 128x128 videos with 100 frames.
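The key idea of the abstract can be sketched in a few lines: a first stage produces a short, low-resolution video, and later stages upsample it chunk by chunk, so memory scales with the chunk length rather than the full video. The sketch below is a hypothetical illustration (the function names, shapes, and the nearest-neighbor/frame-repetition upsamplers are stand-ins, not the paper's networks):

```python
import numpy as np

def stage1_generate(seed, T=25, H=32, W=32, C=3):
    # hypothetical stage-1 generator: maps a noise seed to a
    # low-resolution, low-frame-rate video (T, H, W, C)
    rng = np.random.default_rng(seed)
    return rng.standard_normal((T, H, W, C))

def upsample_chunk(chunk, scale=2):
    # hypothetical stage-2/3 upsampler acting on a short temporal chunk:
    # 2x temporal interpolation (frame repetition as a stand-in) and
    # 'scale'x spatial upscaling (nearest-neighbor as a stand-in)
    chunk = chunk.repeat(2, axis=0)                            # T -> 2T
    chunk = chunk.repeat(scale, axis=1).repeat(scale, axis=2)  # H,W -> sH,sW
    return chunk

def stagewise_generate(seed, chunk_len=5):
    low = stage1_generate(seed)   # global low-res video (the conditioning)
    out = []
    for t in range(0, low.shape[0], chunk_len):
        # each chunk is upsampled locally; conditioning on the shared
        # low-res video is what grounds the chunks into one coherent clip
        out.append(upsample_chunk(low[t:t + chunk_len]))
    return np.concatenate(out, axis=0)

video = stagewise_generate(0)
print(video.shape)  # (50, 64, 64, 3): 2x more frames at 2x the resolution
```

Because each call to `upsample_chunk` only ever sees `chunk_len` low-res frames, peak memory is bounded by the chunk size, which is what makes training the later, high-resolution stages tractable.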

