Unmasked Teacher: Towards Training-Efficient Video Foundation Models

28 Mar 2023  ·  Kunchang Li, Yali Wang, Yizhuo Li, Yi Wang, Yinan He, Limin Wang, Yu Qiao

Video Foundation Models (VFMs) have received limited exploration due to high computational costs and data scarcity. Previous VFMs rely on Image Foundation Models (IFMs), which face challenges in transferring to the video domain. Although VideoMAE has trained a robust ViT from limited data, its low-level reconstruction leads to slow convergence and conflicts with high-level cross-modal alignment. This paper proposes a training-efficient method for temporal-sensitive VFMs that integrates the benefits of existing methods. To increase data efficiency, we mask out most of the low-semantics video tokens and selectively align the unmasked tokens with an IFM, which serves as the UnMasked Teacher (UMT). By providing semantic guidance, our method enables faster convergence and multimodal friendliness. With a progressive pre-training framework, our model can handle various tasks, including scene-related, temporal-related, and complex video-language understanding. Pre-trained on only public sources for 6 days on 32 A100 GPUs, our scratch-built ViT-L/16 achieves state-of-the-art performance on various video tasks. The code and models will be released at https://github.com/OpenGVLab/unmasked_teacher.
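The core idea of the abstract — mask out low-semantics tokens and align only the unmasked ones with a frozen teacher — can be sketched in a few lines. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: selecting tokens by the teacher's [CLS]-token attention follows the paper's description, but the negative-cosine alignment loss and all function names here are illustrative stand-ins for the actual objective and code.

```python
import numpy as np

def select_unmasked_tokens(teacher_attn, keep_ratio=0.2):
    """Pick the tokens to keep (i.e. leave unmasked).

    teacher_attn: shape (N,), the teacher's [CLS]-to-patch attention scores,
    used as a proxy for token semantics. The paper masks most tokens and
    keeps only a small semantic subset (keep_ratio is an assumed value).
    """
    n_keep = max(1, int(len(teacher_attn) * keep_ratio))
    # Indices of the n_keep highest-attention (most semantic) tokens.
    return np.argsort(teacher_attn)[::-1][:n_keep]

def alignment_loss(student_feats, teacher_feats, keep_idx):
    """Align the student's unmasked tokens with the teacher's tokens.

    Negative cosine similarity is used here as one common choice of
    feature-alignment loss; it is an assumption, not the paper's exact loss.
    """
    s = student_feats[keep_idx]
    t = teacher_feats[keep_idx]
    s = s / np.linalg.norm(s, axis=-1, keepdims=True)
    t = t / np.linalg.norm(t, axis=-1, keepdims=True)
    # 0 when student and teacher tokens point the same way, 2 when opposite.
    return float(np.mean(1.0 - np.sum(s * t, axis=-1)))
```

Because the loss is computed only on the small unmasked subset, most of the video tokens never need to be processed by the student, which is where the training efficiency comes from.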


Results from the Paper


 Ranked #1 on Zero-Shot Video Retrieval on LSMDC (using extra training data)

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Zero-Shot Video Retrieval | ActivityNet | UMT-L (ViT-L/16) | text-to-video R@1 | 42.8 | # 1 |
| | | | text-to-video R@5 | 69.6 | # 1 |
| | | | text-to-video R@10 | 79.8 | # 1 |
| | | | video-to-text R@1 | 40.7 | # 1 |
| | | | video-to-text R@5 | 67.6 | # 1 |
| | | | video-to-text R@10 | 78.6 | # 1 |
| Video Retrieval | ActivityNet | UMT-L (ViT-L/16) | text-to-video R@1 | 66.8 | # 2 |
| | | | text-to-video R@5 | 89.1 | # 2 |
| | | | text-to-video R@10 | 94.9 | # 2 |
| | | | video-to-text R@1 | 64.4 | # 1 |
| | | | video-to-text R@5 | 89.1 | # 1 |
| | | | video-to-text R@10 | 94.8 | # 1 |
| Video Question Answering | ActivityNet-QA | UMT-L (ViT-L/16) | Accuracy | 0.479 | # 3 |
| Action Recognition | AVA v2.2 | UMT-L (ViT-L/16) | mAP | 39.8 | # 6 |
| Zero-Shot Video Retrieval | DiDeMo | UMT-L (ViT-L/16) | text-to-video R@1 | 48.6 | # 1 |
| | | | text-to-video R@5 | 72.9 | # 1 |
| | | | text-to-video R@10 | 79.0 | # 2 |
| | | | video-to-text R@1 | 49.9 | # 1 |
| | | | video-to-text R@5 | 74.8 | # 1 |
| | | | video-to-text R@10 | 81.4 | # 1 |
| Video Retrieval | DiDeMo | UMT-L (ViT-L/16) | text-to-video R@1 | 70.4 | # 1 |
| | | | text-to-video R@5 | 90.1 | # 1 |
| | | | text-to-video R@10 | 93.5 | # 1 |
| | | | video-to-text R@1 | 65.7 | # 1 |
| | | | video-to-text R@5 | 89.6 | # 1 |
| | | | video-to-text R@10 | 93.3 | # 1 |
| Action Classification | Kinetics-400 | UMT-L (ViT-L/16) | Acc@1 | 90.6 | # 3 |
| | | | Acc@5 | 98.7 | # 2 |
| Action Classification | Kinetics-600 | UMT-L (ViT-L/16) | Top-1 Accuracy | 90.5 | # 6 |
| | | | Top-5 Accuracy | 98.8 | # 2 |
| Action Classification | Kinetics-700 | UMT-L (ViT-L/16) | Top-1 Accuracy | 83.6 | # 3 |
| | | | Top-5 Accuracy | 96.7 | # 1 |
| Zero-Shot Video Retrieval | LSMDC | UMT-L (ViT-L/16) | text-to-video R@1 | 25.2 | # 1 |
| | | | text-to-video R@5 | 43.0 | # 2 |
| | | | text-to-video R@10 | 50.5 | # 2 |
| | | | video-to-text R@1 | 23.2 | # 1 |
| | | | video-to-text R@5 | 37.7 | # 1 |
| | | | video-to-text R@10 | 44.2 | # 1 |
| Video Retrieval | LSMDC | UMT-L (ViT-L/16) | text-to-video R@1 | 43.0 | # 1 |
| | | | text-to-video R@5 | 65.5 | # 2 |
| | | | text-to-video R@10 | 73.0 | # 2 |
| | | | video-to-text R@1 | 41.4 | # 1 |
| | | | video-to-text R@5 | 64.3 | # 2 |
| | | | video-to-text R@10 | 71.5 | # 2 |
| Action Classification | Moments in Time | UMT-L (ViT-L/16) | Top-1 Accuracy | 48.7 | # 2 |
| | | | Top-5 Accuracy | 78.2 | # 1 |
| Video Retrieval | MSR-VTT | UMT-L (ViT-L/16) | text-to-video R@1 | 58.8 | # 2 |
| | | | text-to-video R@5 | 81.0 | # 2 |
| | | | text-to-video R@10 | 87.1 | # 3 |
| | | | video-to-text R@1 | 58.6 | # 3 |
| | | | video-to-text R@5 | 81.6 | # 4 |
| | | | video-to-text R@10 | 86.5 | # 4 |
| Zero-Shot Video Retrieval | MSR-VTT | UMT-L (ViT-L/16) | text-to-video R@1 | 42.6 | # 3 |
| | | | text-to-video R@5 | 64.4 | # 3 |
| | | | text-to-video R@10 | 73.1 | # 3 |
| | | | video-to-text R@1 | 38.6 | # 3 |
| | | | video-to-text R@5 | 59.8 | # 2 |
| | | | video-to-text R@10 | 69.6 | # 2 |
| Visual Question Answering (VQA) | MSRVTT-QA | UMT-L (ViT-L/16) | Accuracy | 0.471 | # 5 |
| Video Retrieval | MSVD | UMT-L (ViT-L/16) | text-to-video R@1 | 80.3 | # 1 |
| | | | text-to-video R@5 | 98.1 | # 1 |
| | | | text-to-video R@10 | 99.0 | # 1 |
| | | | video-to-text R@1 | 81.2 | # 1 |
| | | | video-to-text R@5 | 96.7 | # 1 |
| | | | video-to-text R@10 | 98.7 | # 12 |
| Zero-Shot Video Retrieval | MSVD | UMT-L (ViT-L/16) | text-to-video R@1 | 72.2 | # 1 |
| | | | text-to-video R@5 | 94.2 | # 1 |
| | | | text-to-video R@10 | 96.9 | # 1 |
| | | | video-to-text R@1 | 72.4 | # 1 |
| | | | video-to-text R@5 | 93.4 | # 1 |
| | | | video-to-text R@10 | 95.8 | # 1 |
| Visual Question Answering (VQA) | MSVD-QA | UMT-L (ViT-L/16) | Accuracy | 0.552 | # 8 |
| Video Retrieval | SSv2-label retrieval | UMT-L (ViT-L/16) | text-to-video R@1 | 73.3 | # 1 |
| | | | text-to-video R@5 | 92.7 | # 1 |
| | | | text-to-video R@10 | 96.6 | # 1 |
| Video Retrieval | SSv2-template retrieval | UMT-L (ViT-L/16) | text-to-video R@1 | 90.8 | # 1 |
| | | | text-to-video R@5 | 100.0 | # 1 |
| | | | text-to-video R@10 | 100.0 | # 1 |

All results use extra training data.
