MSPred: Video Prediction at Multiple Spatio-Temporal Scales with Hierarchical Recurrent Networks

17 Mar 2022 · Angel Villar-Corrales, Ani Karapetyan, Andreas Boltres, Sven Behnke

Autonomous systems not only need to understand their current environment, but should also be able to predict future actions conditioned on past states, for instance based on captured camera frames. However, existing models mainly focus on forecasting future video frames over short time horizons, which limits their use for long-term action planning. We propose Multi-Scale Hierarchical Prediction (MSPred), a novel video prediction model able to simultaneously forecast possible future outcomes at different levels of granularity and at different spatio-temporal scales. By combining spatial and temporal downsampling, MSPred efficiently predicts abstract representations such as human poses or object locations over long time horizons, while still maintaining competitive performance for video frame prediction. In our experiments, we demonstrate that MSPred accurately predicts future video frames as well as high-level representations (e.g. keypoints or semantics) on bin-picking and action recognition datasets, while consistently outperforming popular approaches for future frame prediction. Furthermore, we ablate different modules and design choices in MSPred, experimentally validating that combining features of different spatial and temporal granularity leads to superior performance. Code and models to reproduce our experiments can be found in
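The core idea of the abstract — a hierarchy of recurrent cells in which higher levels operate on spatially coarser features and are updated less frequently — can be illustrated with a minimal sketch. Everything below (the toy RNN cell, the update periods, the feature dimensions) is illustrative and assumed, not the paper's actual architecture:

```python
import numpy as np

def avg_pool2x(x):
    """Spatially downsample an (H, W, C) feature map by factor 2 via average pooling."""
    h, w, c = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

class SimpleRNNCell:
    """Toy recurrent cell: h' = tanh(x @ Wx + h @ Wh). Stands in for a ConvLSTM."""
    def __init__(self, in_dim, hid_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.wx = rng.standard_normal((in_dim, hid_dim)) * 0.1
        self.wh = rng.standard_normal((hid_dim, hid_dim)) * 0.1

    def step(self, x, h):
        return np.tanh(x @ self.wx + h @ self.wh)

class MultiScaleRecurrent:
    """Hierarchy of recurrent cells. Level k sees a spatially coarser feature map
    (one extra 2x pooling per level) and is updated only every periods[k] steps,
    giving each level a different spatio-temporal granularity.
    The periods (1, 4, 8) are illustrative, not the paper's values."""
    def __init__(self, feat_dim=8, periods=(1, 4, 8)):
        self.periods = periods
        self.feat_dim = feat_dim
        self.cells = [SimpleRNNCell(feat_dim, feat_dim, seed=k)
                      for k in range(len(periods))]

    def forward(self, frames):
        """frames: array of shape (T, H, W, C) with C == feat_dim.
        Returns the final hidden state of every level."""
        hiddens = [np.zeros(self.feat_dim) for _ in self.periods]
        for t, frame in enumerate(frames):
            feat = frame
            for k, (cell, period) in enumerate(zip(self.cells, self.periods)):
                if k > 0:
                    feat = avg_pool2x(feat)       # coarser spatial scale per level
                if t % period == 0:               # coarser temporal scale: update rarely
                    x = feat.mean(axis=(0, 1))    # collapse space to a C-dim vector
                    hiddens[k] = cell.step(x, hiddens[k])
        return hiddens
```

Running the model on a random 16-frame clip yields one hidden state per level; the top level has only been updated twice (at t = 0 and t = 8), which is what makes long-horizon, high-level prediction cheap.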


Results from the Paper

Ranked #1 on Video Prediction on KTH (LPIPS metric)
Task              Dataset        Model    Metric  Value   Global Rank
Video Prediction  KTH            MSPred   LPIPS   0.029   #1
                                          PSNR    27.81   #10
                                          SSIM    0.951   #1
                                          MSE     23.18   #1
Video Prediction  Moving MNIST   MSPred   MSE     34.44   #19
                                          SSIM    0.975   #1
                                          LPIPS   0.024   #1
                                          PSNR    26.82   #1
Video Prediction  SynpickVP      MSPred   MSE     53.09   #3
                                          PSNR    27.89   #1
                                          SSIM    0.881   #3
                                          LPIPS   0.033   #1
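For reference, MSE and PSNR in tables like the one above follow standard definitions (benchmarks sometimes sum MSE over pixels rather than averaging, so absolute values are only comparable within one leaderboard). SSIM and LPIPS are omitted here since LPIPS requires a pretrained network. A minimal sketch of the two simple metrics, with illustrative inputs:

```python
import numpy as np

def mse(pred, target):
    """Mean squared error between two images (or batches) of the same shape."""
    return float(np.mean((pred - target) ** 2))

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB, for images with values in [0, max_val]."""
    err = np.mean((pred - target) ** 2)
    return float(10.0 * np.log10(max_val ** 2 / err))

# Illustrative example: a prediction off by a constant 0.1 everywhere.
target = np.zeros((4, 4))
pred = np.full((4, 4), 0.1)
print(mse(pred, target))   # 0.01
print(psnr(pred, target))  # 20 dB
```

Higher PSNR/SSIM and lower MSE/LPIPS indicate better predictions, which is why MSPred's LPIPS of 0.029 on KTH corresponds to rank #1.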