Self-supervised Co-training for Video Representation Learning

NeurIPS 2020 · Tengda Han, Weidi Xie, Andrew Zisserman

The objective of this paper is visual-only self-supervised video representation learning. We make the following contributions: (i) we investigate the benefit of adding semantic-class positives to instance-based Info Noise Contrastive Estimation (InfoNCE) training, showing that this form of supervised contrastive learning leads to a clear improvement in performance; (ii) we propose a novel self-supervised co-training scheme to improve the popular InfoNCE loss, exploiting the complementary information from different views of the same data source (RGB streams and optical flow) by using one view to obtain positive class samples for the other; (iii) we thoroughly evaluate the quality of the learnt representation on two different downstream tasks: action recognition and video retrieval. In both cases, the proposed approach demonstrates state-of-the-art or comparable performance to other self-supervised approaches, whilst being significantly more efficient to train, i.e. requiring far less training data to achieve similar performance.
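The heart of contribution (ii) is an InfoNCE loss generalised to multiple positives, where the positive set for an RGB clip is mined as that clip's nearest neighbours in the optical-flow embedding space (and vice versa). Below is a minimal PyTorch sketch of this idea; it is not the authors' released implementation, and the function name, the `topk` neighbourhood size, and the in-batch mining (the paper mines positives from a larger feature bank) are illustrative assumptions.

```python
# Minimal sketch of a CoCLR-style multi-positive InfoNCE loss.
# NOT the authors' released code: names, the `topk` parameter, and
# in-batch mining are illustrative assumptions. All embeddings are
# assumed to be L2-normalised.
import torch
import torch.nn.functional as F


def coclr_style_loss(q_rgb, k_rgb, z_flow, temperature=0.07, topk=5):
    """q_rgb:  (N, D) queries, one augmentation of each RGB clip.
    k_rgb:  (N, D) keys, another augmentation of the same clips.
    z_flow: (N, D) flow embeddings of the same clips, used only to
            mine extra positives for the RGB stream.
    """
    n = q_rgb.size(0)
    logits = q_rgb @ k_rgb.t() / temperature        # (N, N) similarities

    # Mine positives in the complementary view: for each clip, take its
    # topk nearest neighbours in flow space (the clip itself, on the
    # diagonal, is always treated as a positive).
    flow_sim = z_flow @ z_flow.t()
    nn_idx = flow_sim.topk(topk, dim=1).indices
    pos_mask = torch.zeros(n, n, dtype=torch.bool, device=q_rgb.device)
    pos_mask.scatter_(1, nn_idx, True)
    pos_mask.fill_diagonal_(True)

    # Multi-positive InfoNCE: -log( sum_pos exp(sim) / sum_all exp(sim) )
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    return -log_prob.masked_fill(~pos_mask, float('-inf')).logsumexp(dim=1).mean()


# Toy usage with random L2-normalised embeddings.
q = F.normalize(torch.randn(8, 128), dim=1)
k = F.normalize(torch.randn(8, 128), dim=1)
f = F.normalize(torch.randn(8, 128), dim=1)
print(coclr_style_loss(q, k, f))
```

In the full co-training scheme the mining is applied symmetrically, with RGB-space neighbours providing positives for the flow network, and the two networks are trained in alternation.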

Results from the Paper

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank | Pretrain |
|---|---|---|---|---|---|---|
| Self-Supervised Action Recognition | HMDB51 (finetuned) | CoCLR | Top-1 Accuracy | 54.6 | #12 | – |
| Self-Supervised Action Recognition | UCF101 (finetuned) | CoCLR | 3-fold Accuracy | 87.9 | #12 | K400 |

Results from Other Papers


| Task | Dataset | Model | Metric Name | Metric Value | Rank | Frozen |
|---|---|---|---|---|---|---|
| Self-Supervised Action Recognition | HMDB51 | CoCLR | Top-1 Accuracy | 46.1 | #34 | false |
| Self-Supervised Action Recognition | UCF101 | CoCLR | 3-fold Accuracy | 74.5 | #34 | false |
