Unsupervised Learning of Object Structure and Dynamics from Videos

Extracting and predicting object structure and dynamics from videos without supervision is a major challenge in machine learning. To address this challenge, we adopt a keypoint-based image representation and learn a stochastic dynamics model of the keypoints. Future frames are reconstructed from the keypoints and a reference frame. By modeling dynamics in the keypoint coordinate space, we achieve stable learning and avoid compounding of errors in pixel space. Our method improves upon unstructured representations both for pixel-level video prediction and for downstream tasks requiring object-level understanding of motion dynamics. We evaluate our model on diverse datasets: a multi-agent sports dataset, the Human3.6M dataset, and datasets based on continuous control tasks from the DeepMind Control Suite. The spatially structured representation outperforms unstructured representations on a range of motion-related tasks such as object tracking, action recognition and reward prediction.

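The abstract describes the core idea at a high level: detect keypoints in each frame, model stochastic dynamics directly on the keypoint coordinates, and decode future frames from the predicted keypoints plus a reference frame. The sketch below is a minimal illustration of that idea, not the authors' implementation: the spatial-softmax keypoint detector, the module sizes, and the simplified prior-only stochastic rollout are assumptions made for brevity, and the image decoder that reconstructs frames from keypoints and the reference frame is omitted.

```python
# Minimal sketch (not the paper's code) of modeling dynamics in keypoint
# coordinate space instead of pixel space. All sizes are illustrative.
import torch
import torch.nn as nn

class KeypointDetector(nn.Module):
    """Maps an image to K keypoint coordinates in [-1, 1] via a spatial softmax."""
    def __init__(self, num_keypoints=16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_keypoints, 3, stride=2, padding=1),
        )

    def forward(self, image):                       # image: (B, 3, H, W)
        heatmaps = self.conv(image)                 # (B, K, H', W')
        b, k, h, w = heatmaps.shape
        probs = heatmaps.flatten(2).softmax(-1).view(b, k, h, w)
        ys = torch.linspace(-1, 1, h, device=image.device)
        xs = torch.linspace(-1, 1, w, device=image.device)
        y = (probs.sum(3) * ys).sum(-1)             # expected row coordinate
        x = (probs.sum(2) * xs).sum(-1)             # expected column coordinate
        return torch.stack([x, y], dim=-1)          # (B, K, 2)

class KeypointDynamics(nn.Module):
    """Stochastic RNN over flattened keypoint coordinates (prior-only rollout)."""
    def __init__(self, num_keypoints=16, hidden=128, latent=16):
        super().__init__()
        d = num_keypoints * 2
        self.rnn = nn.GRUCell(d + latent, hidden)
        self.prior = nn.Linear(hidden, 2 * latent)  # mean and log-variance of z_t
        self.decode = nn.Linear(hidden, d)          # next keypoints from hidden state

    def rollout(self, keypoints, steps):
        b, k, _ = keypoints.shape
        kp = keypoints.flatten(1)                   # (B, K*2)
        h = kp.new_zeros(b, self.rnn.hidden_size)
        preds = []
        for _ in range(steps):
            mu, logvar = self.prior(h).chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # sample latent
            h = self.rnn(torch.cat([kp, z], dim=-1), h)
            kp = self.decode(h)                     # predicted keypoints, flattened
            preds.append(kp.view(b, k, 2))
        return torch.stack(preds, dim=1)            # (B, steps, K, 2)

# Usage: detect keypoints in the last observed frame, then predict 10 future steps.
detector, dynamics = KeypointDetector(), KeypointDynamics()
frame = torch.randn(2, 3, 64, 64)                   # dummy reference frame
future_kp = dynamics.rollout(detector(frame), steps=10)
print(future_kp.shape)                               # torch.Size([2, 10, 16, 2])
```

In the full model, a decoder would render each set of predicted keypoints back into an image by combining keypoint heatmaps with appearance features from the reference frame, so pixel-level errors do not feed back into the dynamics model.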
Published at NeurIPS 2019.

Results from the Paper

Task: Video Prediction
Dataset: KTH
Model: Struct-VRNN (from Grid-keypoints)

Metric                        Value    Global Rank
LPIPS                         0.124    #8
PSNR                          24.29    #26
FVD                           395.0    #11
SSIM                          0.766    #26
Cond (conditioning frames)    10       #1
Pred (predicted frames)       40       #22
Params (M)                    2.3      #2
Train                         10       #1

