Large-scale weakly-supervised pre-training for video action recognition

Current fully-supervised video datasets consist of only a few hundred thousand videos and fewer than a thousand domain-specific labels. This hinders progress towards advanced video architectures. This paper presents an in-depth study of using large volumes of web videos for pre-training video models for the task of action recognition. Our primary empirical finding is that pre-training at a very large scale (over 65 million videos), despite being trained on noisy social-media videos and hashtags, substantially improves the state-of-the-art on three challenging public action recognition datasets. Further, we examine three questions in the construction of weakly-supervised video action datasets. First, given that actions involve interactions with objects, how should one construct a verb-object pre-training label space to benefit transfer learning the most? Second, frame-based models perform quite well on action recognition; is pre-training for good image features sufficient, or is pre-training for spatio-temporal features valuable for optimal transfer learning? Finally, actions are generally less well-localized in long videos than in short videos; since action labels are provided at the video level, how should one choose training clips for best performance, given a fixed budget on the number or total duration of videos?

CVPR 2019
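The paper's third question concerns how to pick training clips when labels are only given per video. As a minimal sketch of the design space (the function and parameter names are illustrative, not from the paper), the snippet below contrasts two simple strategies for choosing fixed-length clip start positions under a fixed per-video clip budget:

```python
import random

def sample_clip_starts(num_frames, clip_len=32, budget=10, uniform=True):
    """Pick start indices of fixed-length clips from a single video.

    With video-level labels, actions in long videos may occur anywhere,
    so the clip selection strategy matters. This toy function compares
    two simple options under a fixed clip budget per video.
    """
    last_start = max(num_frames - clip_len, 0)
    if uniform:
        # Spread clip starts evenly across the full video.
        step = last_start / max(budget - 1, 1)
        return [round(i * step) for i in range(budget)]
    # Sample clip starts at random positions in the video.
    return sorted(random.randint(0, last_start) for _ in range(budget))

# Example: ten 32-frame clips from a 3000-frame video.
starts = sample_clip_starts(num_frames=3000, clip_len=32, budget=10)
```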

Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Egocentric Activity Recognition | EPIC-KITCHENS-55 | R(2+1)D-152-SE (ig) | Actions Top-1 (S2) | 25.6 | #2 |
| Egocentric Activity Recognition | EPIC-KITCHENS-55 | R(2+1)D-34 (kinetics) | Actions Top-1 (S2) | 16.8 | #6 |
| Action Classification | Kinetics-400 | irCSN-152 (IG-Kinetics-65M pretrain) | Acc@1 | 82.8 | #64 |
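These results come from fine-tuning weakly-supervised pretrained backbones on the target datasets. The paper's IG-65M weights are not bundled with torchvision, but the transfer setup can be sketched with torchvision's Kinetics-400-pretrained R(2+1)D-18 as a stand-in; the 174-class head and the dummy batch below are illustrative assumptions, not values from the paper:

```python
import torch
import torch.nn as nn
from torchvision.models.video import r2plus1d_18

# Kinetics-pretrained R(2+1)D-18 stands in for the paper's larger
# IG-65M-pretrained R(2+1)D / irCSN variants.
model = r2plus1d_18(pretrained=True)

# Replace the classification head for a hypothetical downstream
# action dataset; the pretrained backbone features transfer.
num_target_classes = 174  # illustrative target label count
model.fc = nn.Linear(model.fc.in_features, num_target_classes)

# One forward pass on a dummy clip batch:
# shape is (batch, channels, frames, height, width).
clips = torch.randn(2, 3, 16, 112, 112)
logits = model(clips)  # shape: (2, 174)
```

From here, standard fine-tuning applies: train with cross-entropy on clip-level labels, optionally freezing early backbone stages when the downstream dataset is small.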
