Latent Video Transformer

18 Jun 2020 · Ruslan Rakhimov, Denis Volkhonskiy, Alexey Artemov, Denis Zorin, Evgeny Burnaev

The video generation task can be formulated as predicting future video frames given some past frames. Recent generative models for video suffer from high computational requirements; some require up to 512 Tensor Processing Units for parallel training. In this work, we address this problem by modeling the dynamics in a latent space. After transforming frames into the latent space, our model predicts the latent representations of future frames in an autoregressive manner. We demonstrate the performance of our approach on the BAIR Robot Pushing and Kinetics-600 datasets. This approach reduces the training requirements to 8 Graphics Processing Units while maintaining comparable generation quality.
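
The sketch below is a rough, minimal PyTorch illustration of the pipeline the abstract describes, not the paper's implementation. It assumes a VQ-VAE-style encoder that quantizes each frame into a grid of discrete latent codes and a causal Transformer that predicts the flattened code sequence autoregressively; all module names (FrameEncoder, LatentTransformer), layer sizes, and shapes are illustrative assumptions.

```python
# Minimal sketch of latent autoregressive video prediction (illustrative only).
# Assumptions: discrete VQ-VAE-style frame codes + a causal Transformer prior.
import torch
import torch.nn as nn


class FrameEncoder(nn.Module):
    """Maps an RGB frame to a grid of discrete latent codes (VQ-VAE-style)."""
    def __init__(self, codebook_size=512, embed_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, embed_dim, 4, stride=4),          # 64x64 -> 16x16
            nn.ReLU(),
            nn.Conv2d(embed_dim, embed_dim, 2, stride=2),  # 16x16 -> 8x8
        )
        self.codebook = nn.Embedding(codebook_size, embed_dim)

    def forward(self, frame):                    # frame: (B, 3, 64, 64)
        z = self.conv(frame)                     # (B, D, 8, 8)
        z = z.permute(0, 2, 3, 1)                # (B, 8, 8, D)
        flat = z.reshape(-1, z.size(-1))         # (B*64, D)
        # Nearest-codebook-entry quantization -> integer token ids
        dist = torch.cdist(flat, self.codebook.weight)     # (B*64, K)
        return dist.argmin(-1).view(frame.size(0), -1)     # (B, 64)


class LatentTransformer(nn.Module):
    """Causal Transformer over the flattened sequence of latent tokens."""
    def __init__(self, codebook_size=512, d_model=256, n_layers=4, max_len=1024):
        super().__init__()
        self.tok = nn.Embedding(codebook_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, codebook_size)

    def forward(self, tokens):                   # tokens: (B, T) integer codes
        T = tokens.size(1)
        x = self.tok(tokens) + self.pos(torch.arange(T, device=tokens.device))
        # Causal mask: each position may attend only to itself and the past.
        mask = torch.triu(
            torch.full((T, T), float("-inf"), device=tokens.device), diagonal=1
        )
        return self.head(self.blocks(x, mask=mask))  # (B, T, codebook_size) logits


if __name__ == "__main__":
    B, n_past = 2, 1                             # e.g. condition on one past frame
    frames = torch.rand(B, n_past, 3, 64, 64)
    enc, prior = FrameEncoder(), LatentTransformer()
    # Encode past frames into one flat token sequence, then predict the next token,
    # i.e. the first latent code of the next frame.
    tokens = torch.cat([enc(frames[:, t]) for t in range(n_past)], dim=1)
    logits = prior(tokens)
    next_token = logits[:, -1].argmax(-1)
    print(tokens.shape, next_token.shape)
```

In a full system, generation would iterate this step: sample latent tokens one at a time until a whole frame's code grid is produced, then decode the grid back to pixels with the autoencoder's decoder.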

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Video Generation | BAIR Robot Pushing | Baseline (from LVT) | FVD score | 320.9 | #28 |
| | | | Cond | 1 | #1 |
| | | | Pred | 15 | #8 |
| | | | Train | 15 | #2 |
| Video Generation | BAIR Robot Pushing | LVT | FVD score | 125.76±2.90 | #14 |
| | | | Cond | 1 | #1 |
| | | | Pred | 15 | #8 |
| | | | Train | 15 | #2 |
| Video Prediction | Kinetics-600 (12 frames, 64x64) | LVT | FVD | 224.73 | #13 |
| | | | Cond | 5 | #2 |
| | | | Pred | 11 | #2 |

Methods


No methods listed for this paper.