Contrastive Audio-Visual Masked Autoencoder

In this paper, we first extend the recent Masked Auto-Encoder (MAE) model from a single modality to audio-visual multi-modalities. Subsequently, we propose the Contrastive Audio-Visual Masked Auto-Encoder (CAV-MAE) by combining contrastive learning and masked data modeling, two major self-supervised learning frameworks, to learn a joint and coordinated audio-visual representation. Our experiments show that the contrastive audio-visual correspondence learning objective not only enables the model to perform audio-visual retrieval tasks, but also helps the model learn a better joint representation. As a result, our fully self-supervised pretrained CAV-MAE achieves a new state-of-the-art (SOTA) accuracy of 65.9% on VGGSound, and is comparable with the previous best supervised pretrained model on AudioSet on the audio-visual event classification task. Code and pretrained models are at https://github.com/yuangongnd/cav-mae.
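The abstract describes a training objective that combines the two self-supervised frameworks: a contrastive audio-visual correspondence loss plus a masked-reconstruction (MAE) loss. A minimal NumPy sketch of such a combined objective is below; the function name `cav_mae_loss`, the loss weight `lam`, and the temperature `tau` are illustrative assumptions, not the paper's exact values or implementation.

```python
import numpy as np

def cav_mae_loss(audio_emb, video_emb, recon, target, mask, lam=0.01, tau=0.05):
    """Sketch of a CAV-MAE-style objective: symmetric InfoNCE contrastive
    loss over paired audio/video clip embeddings, plus MSE reconstruction
    loss computed only on masked patches. lam and tau are illustrative."""
    # L2-normalize clip-level embeddings, shape (B, D)
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    sim = (a @ v.T) / tau  # (B, B) similarity; matched pairs on the diagonal

    def info_nce(logits):
        # numerically stable log-softmax over each row
        logits = logits - logits.max(axis=1, keepdims=True)
        logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))  # positives are the diagonal entries

    contrastive = 0.5 * (info_nce(sim) + info_nce(sim.T))

    # MAE-style reconstruction: MSE averaged over masked patches only.
    # recon/target: (B, N, P) patch predictions/targets; mask: (B, N) in {0, 1}
    per_patch_mse = ((recon - target) ** 2).mean(axis=-1)
    recon_loss = (per_patch_mse * mask).sum() / mask.sum()

    return lam * contrastive + recon_loss
```

The key design point the abstract highlights is that the contrastive term operates on paired clips (giving a coordinated representation usable for retrieval), while the reconstruction term drives a joint representation; the weight `lam` balances the two.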


Results from the Paper


 Ranked #1 on Audio Tagging on AudioSet (using extra training data)

| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Multi-modal Classification | AudioSet | CAV-MAE | Average mAP | 0.512 | #1 |
| Audio Classification | AudioSet | CAV-MAE (Audio-Visual) | Test mAP | 0.512 | #4 |
| Audio Classification | AudioSet | CAV-MAE (Audio-Only) | Test mAP | 0.466 | #24 |
| Audio Tagging | AudioSet | CAV-MAE (Audio-Visual) | mean average precision | 0.512 | #1 |
| Audio Tagging | AudioSet | CAV-MAE (Audio-Only) | mean average precision | 0.466 | #9 |
| Audio Classification | AudioSet | CAV-MAE (Visual-Only) | Test mAP | 0.262 | #39 |
| Audio Classification | VGGSound | CAV-MAE (Audio-Visual) | Top-1 Accuracy | 65.9 | #5 |
| Audio Classification | VGGSound | CAV-MAE (Audio-Only) | Top-1 Accuracy | 59.5 | #10 |
| Multi-modal Classification | VGGSound | CAV-MAE (Audio-Visual) | Top-1 Accuracy | 65.9 | #2 |

Methods