Self-Supervised Video Representation Learning with Space-Time Cubic Puzzles

24 Nov 2018  ·  Dahun Kim, Donghyeon Cho, In So Kweon

Self-supervised tasks such as colorization, inpainting, and jigsaw puzzles have been used for visual representation learning on still images when labeled images are scarce or entirely absent. Recently, this line of work has been extended to the video domain, where the cost of human labeling is even higher. However, most existing methods are still based on 2D CNN architectures that cannot directly capture the spatio-temporal information needed for video applications. In this paper, we introduce a new self-supervised task called \textit{Space-Time Cubic Puzzles} for training 3D CNNs on large-scale video datasets. The task requires a network to rearrange permuted 3D spatio-temporal crops. By solving \textit{Space-Time Cubic Puzzles}, the network learns both the spatial appearance and the temporal relations of video frames, which is our ultimate goal. In experiments, we demonstrate that our learned 3D representation transfers well to action recognition tasks and outperforms state-of-the-art 2D CNN-based competitors on the UCF101 and HMDB51 datasets.
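
The pretext task is straightforward to sketch in code. Below is a minimal PyTorch illustration of one way to build a puzzle sample and classify the permutation; the 2x2 spatial grid over the full temporal extent, the tensor shapes, and the helper names (make_puzzle, forward_puzzle, backbone, head) are our own assumptions for illustration, not the paper's exact sampling protocol.

```python
# A minimal sketch of a Space-Time Cubic Puzzles-style pretext task.
# Assumptions (not from the paper): PyTorch, a 2x2 spatial grid of cubes
# over the full clip length, and the helper names below.
import itertools
import random
import torch

# Fixed set of cube orderings; the network classifies which one was applied.
PERMUTATIONS = list(itertools.permutations(range(4)))  # 4! = 24 classes

def make_puzzle(clip):
    """clip: (C, T, H, W) video tensor. Returns the permuted cubes and the
    index of the permutation, which serves as the self-supervision label."""
    C, T, H, W = clip.shape
    h, w = H // 2, W // 2
    # Cut the clip into a 2x2 grid of spatio-temporal cubes.
    cubes = [clip[:, :, i * h:(i + 1) * h, j * w:(j + 1) * w]
             for i in range(2) for j in range(2)]
    label = random.randrange(len(PERMUTATIONS))
    perm = PERMUTATIONS[label]
    shuffled = torch.stack([cubes[p] for p in perm])  # (4, C, T, h, w)
    return shuffled, label

def forward_puzzle(backbone, head, shuffled):
    """backbone: a shared 3D CNN (e.g., a 3D ResNet-18) mapping one cube to a
    feature vector; head: a classifier over the 24 permutation classes."""
    feats = [backbone(cube.unsqueeze(0)) for cube in shuffled]  # per-cube features
    return head(torch.cat(feats, dim=1))  # logits over permutations
```

Training then reduces to cross-entropy on the permutation label; after pre-training, the shared 3D backbone is kept and fine-tuned on the downstream action recognition task.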

Task                                Dataset   Model                             Metric            Value   Global Rank
Self-Supervised Action Recognition  HMDB51    3D Cubic Puzzles (3D ResNet-18)   Top-1 Accuracy    33.7    #42
Self-Supervised Action Recognition  UCF101    3D Cubic Puzzles (3D ResNet-18)   3-fold Accuracy   65.8    #42

Both entries use Kinetics400 as the pre-training dataset, with the backbone fine-tuned rather than frozen (Frozen: false).
