Efficient Video Representation Learning via Motion-Aware Token Selection

19 Nov 2022 · Sunil Hwang, Jaehong Yoon, Youngwan Lee, Sung Ju Hwang

Recently emerged Masked Video Modeling techniques have demonstrated their potential by significantly outperforming previous methods in self-supervised learning for video. However, because of their random masking strategies they waste an excessive amount of computation and memory on predicting uninformative tokens/frames, requiring enormous compute for training (e.g., over 16 nodes with 128 NVIDIA A100 GPUs). To resolve this issue, we exploit the unequal information density among the patches in videos and propose a new token selection method, MATS: Motion-Aware Token Selection, which finds tokens containing rich motion features and drops uninformative ones during both self-supervised pre-training and fine-tuning. We further present an adaptive frame selection strategy that allows the model to focus on informative and causal frames with minimal redundancy. Our method significantly reduces computation and memory requirements, enabling pre-training and fine-tuning on a single machine with 8 GPUs while achieving performance comparable to computation- and memory-heavy state-of-the-art methods on multiple benchmarks as well as on the uncurated Ego4D dataset. We hope that the efficiency of MATS will lower the barrier to further research on self-supervised learning for videos.
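The core idea, scoring patches and frames by motion and keeping only the most informative ones, can be illustrated with a minimal sketch. This is not the authors' MATS implementation: it assumes motion can be approximated by simple frame differences, and the function names (`motion_scores`, `select_tokens_and_frames`), the patch size, and the top-k budgets are hypothetical placeholders for illustration only.

```python
# Minimal sketch of motion-based frame/token selection (assumed proxy: frame differences).
import torch


def motion_scores(video: torch.Tensor, patch_size: int = 16) -> torch.Tensor:
    """Score each spatial patch of each frame transition by mean absolute frame difference.

    video: (T, C, H, W) tensor of frames in [0, 1].
    Returns: (T-1, H // patch_size, W // patch_size) per-patch motion scores.
    """
    diff = (video[1:] - video[:-1]).abs().mean(dim=1)            # (T-1, H, W)
    # Average per-pixel differences over non-overlapping patches.
    scores = torch.nn.functional.avg_pool2d(
        diff.unsqueeze(1), kernel_size=patch_size, stride=patch_size
    ).squeeze(1)                                                 # (T-1, H/ps, W/ps)
    return scores


def select_tokens_and_frames(video: torch.Tensor,
                             num_frames: int,
                             tokens_per_frame: int,
                             patch_size: int = 16):
    """Keep the frames with the most overall motion, then the highest-motion
    patches within each kept frame; the resulting indices could replace a
    random masking pattern in a ViT-style video encoder."""
    scores = motion_scores(video, patch_size)                    # (T-1, h, w)
    frame_motion = scores.flatten(1).mean(dim=1)                 # (T-1,)
    keep_frames = frame_motion.topk(num_frames).indices.sort().values
    flat = scores[keep_frames].flatten(1)                        # (num_frames, h*w)
    keep_tokens = flat.topk(tokens_per_frame, dim=1).indices     # per-frame patch ids
    return keep_frames, keep_tokens


# Usage: a random 16-frame clip of 224x224 RGB frames.
clip = torch.rand(16, 3, 224, 224)
frames, tokens = select_tokens_and_frames(clip, num_frames=8, tokens_per_frame=98)
print(frames.shape, tokens.shape)  # torch.Size([8]) torch.Size([8, 98])
```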

Task | Dataset | Model | Metric | Value | Global Rank
Object State Change Classification | Ego4D | VideoMS (ViT-B) | Acc | 76.2 | #1
Self-Supervised Action Recognition | HMDB51 | VideoMS (ViT-B) | Top-1 Accuracy | 65.8 | #15
Self-Supervised Action Recognition | UCF101 | VideoMS (ViT-B) | 3-fold Accuracy | 93.4 | #11

Both action recognition results use no extra pre-training data and a non-frozen (fine-tuned) backbone.
