Most successful self-supervised learning methods are trained to align the representations of two independent views of the data. State-of-the-art methods in video are inspired by image techniques, where these two views are similarly extracted by cropping and augmenting the resulting crop. However, these methods miss a crucial element of the video domain: time. We introduce BraVe, a self-supervised learning framework for video. In BraVe, one of the views has access to a narrow temporal window of the video while the other view has broad access to the video content. Our models learn to generalise from the narrow view to the general content of the video. Furthermore, BraVe processes the views with different backbones, enabling alternative augmentations or modalities in the broad view, such as optical flow, randomly convolved RGB frames, audio, or their combinations. We demonstrate that BraVe achieves state-of-the-art results in self-supervised representation learning on standard video and audio classification benchmarks, including UCF101, HMDB51, Kinetics, ESC-50 and AudioSet.
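The core idea above (a narrow temporal view predicting the representation of a broad view, with separate backbones per view) can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the "backbones" are single linear maps, the video is random data, and all shapes, weights, and the normalized-regression loss are hypothetical stand-ins for the cross-view prediction objective described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)


def sample_clip(video, start, length):
    """Extract a temporal window of `length` frames starting at `start`."""
    return video[start:start + length]


def encode(clip, weights):
    """Toy 'backbone': average-pool over time, then a linear projection."""
    pooled = clip.mean(axis=0)            # (features,)
    return weights @ pooled               # (embed_dim,)


def l2_normalize(x):
    return x / (np.linalg.norm(x) + 1e-8)


def cross_view_loss(pred, target):
    """Regression loss between normalized embeddings (BYOL-style stand-in)."""
    return float(np.sum((l2_normalize(pred) - l2_normalize(target)) ** 2))


# Hypothetical video: 64 frames, each flattened to 128 features.
video = rng.standard_normal((64, 128))

narrow = sample_clip(video, start=20, length=8)   # narrow temporal window
broad = sample_clip(video, start=0, length=64)    # broad view: full video

# Two independent backbones, one per view, as in BraVe's design.
W_narrow = rng.standard_normal((32, 128)) * 0.1
W_broad = rng.standard_normal((32, 128)) * 0.1
W_pred = np.eye(32)  # predictor head: narrow embedding -> broad space

z_narrow = encode(narrow, W_narrow)
z_broad = encode(broad, W_broad)
loss = cross_view_loss(W_pred @ z_narrow, z_broad)
print(f"cross-view prediction loss: {loss:.3f}")
```

In training, gradients from this loss would update the narrow backbone and predictor so the narrow view's embedding anticipates the broad view's content; the broad backbone can consume a different modality (e.g. flow or audio) without changing this structure.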

Published at ICCV 2021.
| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Self-Supervised Audio Classification | AudioSet (MLP) | BraVe:V-FA (TSM-50x2) | Top-1 Accuracy | 34.8 | # 1 |
| Self-Supervised Audio Classification | ESC-50 | BraVe:V-FA (TSM-50x2) | Top-1 Accuracy | 91.1 | # 1 |
| Self-Supervised Action Recognition | HMDB51 | BraVe:V-FA (TSM-50x2) | Top-1 Accuracy | 70.5 | # 3 |
| Self-Supervised Action Recognition | HMDB51 | BraVe:V-FA (TSM-50x2) | Frozen | false | # 1 |
| Self-Supervised Action Recognition | HMDB51 (finetuned) | BraVe:V-FA (TSM-50x2) | Top-1 Accuracy | 77.8 | # 1 |
| Self-Supervised Action Recognition | Kinetics-600 | BraVe:V-FA (TSM-50x2) | Top-1 Accuracy | 71.4 | # 3 |
| Self-Supervised Action Recognition | UCF101 | BraVe:V-FA (TSM-50x2) | 3-fold Accuracy | 93.1 | # 7 |
| Self-Supervised Action Recognition | UCF101 | BraVe:V-FA (TSM-50x2) | Frozen | false | # 1 |
| Self-Supervised Action Recognition | UCF101 (finetuned) | BraVe:V-FA (TSM-50x2) | 3-fold Accuracy | 95.7 | # 1 |
