VATT: Transformers for Multimodal Self-Supervised Learning from Raw Video, Audio and Text

We present a framework for learning multimodal representations from unlabeled data using convolution-free Transformer architectures. Specifically, our Video-Audio-Text Transformer (VATT) takes raw signals as inputs and extracts multimodal representations that are rich enough to benefit a variety of downstream tasks. We train VATT end-to-end from scratch using multimodal contrastive losses and evaluate its performance on the downstream tasks of video action recognition, audio event classification, image classification, and text-to-video retrieval. Furthermore, we study a modality-agnostic, single-backbone Transformer by sharing weights among the three modalities. We show that the convolution-free VATT outperforms state-of-the-art ConvNet-based architectures in the downstream tasks. In particular, VATT's vision Transformer achieves top-1 accuracies of 82.1% on Kinetics-400, 83.6% on Kinetics-600, 72.7% on Kinetics-700, and 41.1% on Moments in Time, setting new records while avoiding supervised pre-training. Transferring to image classification leads to 78.7% top-1 accuracy on ImageNet, compared to 64.7% by training the same Transformer from scratch, showing the generalizability of our model despite the domain gap between videos and images. VATT's audio Transformer also sets a new record on waveform-based audio event recognition by achieving an mAP of 39.4% on AudioSet without any supervised pre-training. VATT's source code is publicly available.
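To make the two core ideas in the abstract concrete, here is a minimal NumPy sketch of (a) turning a raw video clip into flattened spatio-temporal patch tokens, as a convolution-free Transformer front end would, and (b) a symmetric noise-contrastive (InfoNCE-style) loss that aligns two modalities' embeddings in a batch. All function names, the patch size, and the temperature value are illustrative assumptions, not VATT's exact implementation.

```python
import numpy as np

def patchify(video, patch=(4, 16, 16)):
    """Split a raw clip of shape (T, H, W, C) into flattened
    spatio-temporal patches of shape (num_patches, pt*ph*pw*C).
    The patch size (4, 16, 16) is an illustrative choice."""
    T, H, W, C = video.shape
    pt, ph, pw = patch
    # Split each spatio-temporal axis into (blocks, block_size).
    v = video.reshape(T // pt, pt, H // ph, ph, W // pw, pw, C)
    # Group the block indices together, then the within-block indices.
    v = v.transpose(0, 2, 4, 1, 3, 5, 6)
    return v.reshape(-1, pt * ph * pw * C)

def nce_contrastive_loss(z_a, z_b, temperature=0.07):
    """Symmetric InfoNCE loss between two batches of modality
    embeddings (e.g. video vs. audio). Row i of z_a and z_b is a
    positive pair; other rows in the batch act as negatives.
    The temperature value is an assumption for illustration."""
    # L2-normalize so dot products are cosine similarities.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature  # (batch, batch) similarities
    n = len(z_a)                        # positives lie on the diagonal

    def xent(l):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(n), np.arange(n)].mean()

    # Average the a->b and b->a directions.
    return 0.5 * (xent(logits) + xent(logits.T))
```

Perfectly aligned embeddings (`z_b == z_a`) yield a near-zero loss, while unrelated embeddings yield a higher one, which is what drives the paired modalities toward a shared space during training.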

Published at NeurIPS 2021.

Results from the Paper

Ranked #6 on Action Classification on Moments in Time (using extra training data)

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Audio Classification | AudioSet | VATT-Base | Test mAP | 0.394 | #17 |
| Audio Classification | AudioSet | VATT-Base | AUC | 0.971 | #7 |
| Audio Classification | AudioSet | VATT-Base | d-prime | 2.895 | #2 |
| Action Classification | Kinetics-400 | VATT-Large | Vid acc@1 | 82.1 | #26 |
| Action Classification | Kinetics-400 | VATT-Large | Vid acc@5 | 95.5 | #16 |
| Action Classification | Kinetics-600 | VATT-Large | Top-1 Accuracy | 83.6 | #23 |
| Action Classification | Kinetics-600 | VATT-Large | Top-5 Accuracy | 96.6 | #12 |
| Action Classification | Moments in Time | VATT-Large | Top-1 Accuracy | 41.1 | #6 |
| Action Classification | Moments in Time | VATT-Large | Top-5 Accuracy | 67.7 | #4 |