Mutual Suppression Network for Video Prediction using Disentangled Features

13 Apr 2018  ·  Jungbeom Lee, Jangho Lee, Sungmin Lee, Sungroh Yoon

Video prediction has been considered a difficult problem because video contains not only high-dimensional spatial information but also complex temporal information. Video prediction can be performed by finding features in recent frames and using them to generate approximations of upcoming frames. We approach this problem by disentangling spatial and temporal features in videos. We introduce a mutual suppression network (MSnet), which is trained in an adversarial manner and produces spatial features that are free of motion information as well as motion features that contain no spatial information. MSnet then uses a motion-guided connection within an encoder-decoder-based architecture to transform spatial features from a previous frame to the time of an upcoming frame. We show how MSnet can be used for video prediction using disentangled representations. We also carry out experiments to assess how effectively our method disentangles features. MSnet obtains better results than other recent video prediction methods even though it has simpler encoders.
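The sketch below illustrates the general idea described in the abstract: a spatial encoder, a motion encoder, auxiliary "suppression" heads that would be trained adversarially to strip each feature of the other factor, and a motion-guided connection that combines the two features before decoding the next frame. It is a minimal conceptual sketch in PyTorch; all module names, layer sizes, and the form of the motion-guided connection are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Small convolutional encoder (illustrative)."""
    def __init__(self, in_ch, feat_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class MSnetSketch(nn.Module):
    """Conceptual sketch of the MSnet idea (not the authors' code)."""
    def __init__(self, img_ch=1, feat_ch=64):
        super().__init__()
        self.spatial_enc = Encoder(img_ch, feat_ch)        # encodes a single frame
        self.motion_enc = Encoder(2 * img_ch, feat_ch)     # encodes a pair of frames
        # Motion-guided connection: combines spatial features with motion
        # features to shift content toward the next time step (assumed form).
        self.guide = nn.Conv2d(2 * feat_ch, feat_ch, 3, padding=1)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, img_ch, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        # Suppression heads: each tries to recover the factor that should be
        # absent from the corresponding feature; an adversarial loss (not shown)
        # would penalize the encoders when these predictions succeed.
        self.motion_from_spatial = nn.Conv2d(feat_ch, feat_ch, 1)
        self.spatial_from_motion = nn.Conv2d(feat_ch, feat_ch, 1)

    def forward(self, frame_t, frame_prev):
        s = self.spatial_enc(frame_t)                                  # spatial features
        m = self.motion_enc(torch.cat([frame_prev, frame_t], dim=1))  # motion features
        # Cross-predictions consumed only by the adversarial suppression losses.
        m_hat = self.motion_from_spatial(s)
        s_hat = self.spatial_from_motion(m)
        guided = self.guide(torch.cat([s, m], dim=1))                  # motion-guided transform
        next_frame = self.decoder(guided)                              # predicted upcoming frame
        return next_frame, (s, m), (s_hat, m_hat)

# Usage sketch with dummy frames
model = MSnetSketch()
prev_frame = torch.rand(2, 1, 64, 64)
cur_frame = torch.rand(2, 1, 64, 64)
pred, (spatial_feat, motion_feat), _ = model(cur_frame, prev_frame)
print(pred.shape)  # torch.Size([2, 1, 64, 64])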


Datasets

KTH
Results from the Paper


 Ranked #1 on Video Prediction on KTH (Cond metric)

Task              Dataset  Model  Metric Name  Metric Value  Global Rank
Video Prediction  KTH      MSNET  PSNR         27.08         #17
Video Prediction  KTH      MSNET  SSIM         0.876         #5
Video Prediction  KTH      MSNET  Cond         10            #1
Video Prediction  KTH      MSNET  Pred         20            #1

Methods


No methods listed for this paper.