VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset

17 Apr 2023 · Sihan Chen, Xingjian He, Longteng Guo, Xinxin Zhu, Weining Wang, Jinhui Tang, Jing Liu

In this paper, we propose a Vision-Audio-Language Omni-peRception pretraining model (VALOR) for multi-modal understanding and generation. Unlike widely studied vision-language pretraining models, VALOR jointly models the relationships among vision, audio and language in an end-to-end manner. It contains three separate encoders for single-modality representations and a decoder for multimodal conditional text generation. We design two pretext tasks to pretrain the VALOR model: Multimodal Grouping Alignment (MGA) and Multimodal Grouping Captioning (MGC). MGA projects vision, language and audio into the same common space, building vision-language, audio-language and audiovisual-language alignments simultaneously. MGC learns to generate text tokens conditioned on vision, audio, or both. To promote vision-audio-language pretraining research, we construct a large-scale, high-quality tri-modality dataset named VALOR-1M, which contains 1M audible videos with human-annotated audiovisual captions. Extensive experiments show that VALOR learns strong multimodal correlations and generalizes to various downstream tasks (e.g., retrieval, captioning and question answering) with different input modalities (e.g., vision-language, audio-language and audiovisual-language). VALOR achieves new state-of-the-art performance on a series of public cross-modality benchmarks. Code and data are available at the project page: https://casia-iva-group.github.io/projects/VALOR.
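To make the MGA pretext task described above more concrete, the sketch below illustrates a symmetric contrastive alignment between text and the vision / audio / audiovisual groups in a shared embedding space. It is a minimal illustration, not the authors' implementation: the `mga_loss` helper, the embedding dimension, the temperature value, and the simple mean fusion of vision and audio features are all assumptions made for this example; only the high-level idea follows the paper.

```python
import torch
import torch.nn.functional as F

def mga_loss(text_emb, vision_emb, audio_emb, temperature=0.07):
    """Symmetric InfoNCE-style loss aligning text with vision, audio,
    and a fused audiovisual embedding in one common space (illustrative)."""
    t = F.normalize(text_emb, dim=-1)                          # (B, D) text embeddings
    groups = [
        F.normalize(vision_emb, dim=-1),                       # vision-language alignment
        F.normalize(audio_emb, dim=-1),                        # audio-language alignment
        F.normalize((vision_emb + audio_emb) / 2, dim=-1),     # audiovisual-language alignment (assumed mean fusion)
    ]
    targets = torch.arange(t.size(0), device=t.device)         # matched pairs lie on the diagonal
    loss = 0.0
    for g in groups:
        logits = t @ g.T / temperature                         # (B, B) text-to-group similarities
        loss = loss + 0.5 * (F.cross_entropy(logits, targets)  # text -> group direction
                             + F.cross_entropy(logits.T, targets))  # group -> text direction
    return loss / len(groups)

# Example with random features standing in for the three encoders' outputs.
B, D = 8, 512
loss = mga_loss(torch.randn(B, D), torch.randn(B, D), torch.randn(B, D))
print(loss.item())
```

MGC would be trained analogously but with a token-level cross-entropy loss, where the decoder predicts masked caption tokens conditioned on the vision features, the audio features, or both.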


Results from the Paper


 Ranked #1 on Video Captioning on VATEX (using extra training data)

All rows report the VALOR model; Rank is the model's global position on the corresponding benchmark.

| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Video Retrieval | ActivityNet | text-to-video R@1 | 70.1 | #3 |
| Video Retrieval | ActivityNet | text-to-video R@5 | 90.8 | #2 |
| Video Retrieval | ActivityNet | text-to-video R@10 | 95.3 | #2 |
| Video Question Answering | ActivityNet-QA | Accuracy | 48.6 | #9 |
| Text-to-Audio Retrieval | AudioCaps | R@1 | 40.1 | #4 |
| Text-to-Audio Retrieval | AudioCaps | R@5 | 73.9 | #3 |
| Text-to-Audio Retrieval | AudioCaps | R@10 | 83.1 | #4 |
| Audio Captioning | AudioCaps | CIDEr | 0.741 | #9 |
| Audio Captioning | AudioCaps | BLEU-4 | 0.270 | #3 |
| Audio Captioning | AudioCaps | METEOR | 0.231 | #6 |
| Audio Captioning | AudioCaps | ROUGE-L | 0.494 | #3 |
| Zero-Shot Text-to-Audio Retrieval | Clotho | text-to-audio R@1 | 8.4 | #5 |
| Text-to-Audio Retrieval | Clotho | R@1 | 17.5 | #4 |
| Text-to-Audio Retrieval | Clotho | R@5 | 42.7 | #3 |
| Text-to-Audio Retrieval | Clotho | R@10 | 55.3 | #3 |
| Audio Captioning | Clotho | CIDEr | 0.423 | #4 |
| Audio Captioning | Clotho | BLEU-4 | 16.2 | #2 |
| Audio Captioning | Clotho | METEOR | 17.4 | #2 |
| Audio Captioning | Clotho | ROUGE-L | 38.2 | #2 |
| Cross-Modal Retrieval | COCO 2014 | text-to-image R@1 | 61.4 | #14 |
| Cross-Modal Retrieval | COCO 2014 | text-to-image R@5 | 84.4 | #13 |
| Cross-Modal Retrieval | COCO 2014 | text-to-image R@10 | 90.9 | #11 |
| Image Captioning | COCO Captions | CIDEr | 152.5 | #3 |
| Image Captioning | COCO Captions | SPICE | 25.7 | #5 |
| Video Retrieval | DiDeMo | text-to-video R@1 | 61.5 | #6 |
| Video Retrieval | DiDeMo | text-to-video R@5 | 85.3 | #6 |
| Video Retrieval | DiDeMo | text-to-video R@10 | 90.4 | #6 |
| Video Retrieval | LSMDC | text-to-video R@1 | 34.2 | #7 |
| Video Retrieval | LSMDC | text-to-video R@5 | 56.0 | #4 |
| Video Retrieval | LSMDC | text-to-video R@10 | 64.1 | #5 |
| Video Retrieval | MSR-VTT | text-to-video R@1 | 59.9 | #3 |
| Video Retrieval | MSR-VTT | text-to-video R@5 | 83.5 | #2 |
| Video Retrieval | MSR-VTT | text-to-video R@10 | 89.6 | #1 |
| Video Captioning | MSR-VTT | CIDEr | 74.0 | #6 |
| Video Captioning | MSR-VTT | METEOR | 32.9 | #5 |
| Video Captioning | MSR-VTT | ROUGE-L | 68.0 | #4 |
| Video Captioning | MSR-VTT | BLEU-4 | 54.4 | #5 |
| Video Question Answering | MSRVTT-QA | Accuracy | 49.2 | #3 |
| Video Captioning | MSVD | CIDEr | 178.5 | #3 |
| Video Captioning | MSVD | BLEU-4 | 80.7 | #1 |
| Video Captioning | MSVD | METEOR | 51.0 | #2 |
| Video Captioning | MSVD | ROUGE-L | 87.9 | #1 |
| Visual Question Answering (VQA) | MSVD-QA | Accuracy | 0.60 | #4 |
| Audio-Visual Question Answering | MUSIC-AVQA | Accuracy | 78.9 | #2 |
| TGIF-Frame | TGIF-QA | Accuracy | 78.7 | #4 |
| Audio-Visual Captioning | VALOR-32K | METEOR | 15.4 | #1 |
| Audio-Visual Captioning | VALOR-32K | ROUGE-L | 31.8 | #1 |
| Audio-Visual Captioning | VALOR-32K | CIDEr | 61.5 | #2 |
| Audio-Visual Captioning | VALOR-32K | BLEU-4 | 9.6 | #2 |
| Text-to-Audiovisual Retrieval | VALOR-32K | text-to-audiovisual R@1 | 80.9 | #1 |
| Text-to-Audiovisual Retrieval | VALOR-32K | text-to-audiovisual R@5 | 93.9 | #1 |
| Text-to-Audiovisual Retrieval | VALOR-32K | text-to-audiovisual R@10 | 97.1 | #1 |
| Video Captioning | VATEX | BLEU-4 | 45.6 | #1 |
| Video Captioning | VATEX | CIDEr | 95.8 | #3 |
| Video Captioning | VATEX | METEOR | 29.4 | #1 |
| Video Captioning | VATEX | ROUGE-L | 57.4 | #1 |
| Video Retrieval | VATEX | text-to-video R@1 | 78.5 | #2 |
| Video Retrieval | VATEX | text-to-video R@5 | 97.1 | #4 |
| Video Retrieval | VATEX | text-to-video R@10 | 98.7 | #2 |
| Visual Question Answering (VQA) | VQA v2 test-dev | Accuracy | 78.46 | #15 |
| Visual Question Answering (VQA) | VQA v2 test-std | Overall | 78.62 | #10 |
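Most retrieval rows above report Recall@K (R@1/R@5/R@10). For reference, the sketch below shows one common way such numbers are computed from a query-candidate similarity matrix. It is a generic illustration: the `recall_at_k` helper, the matrix size, and the assumption that query i matches candidate i are not taken from the paper, whose exact evaluation protocol may differ.

```python
import torch

def recall_at_k(sim, k):
    """sim[i, j] = similarity of text query i to candidate j; the correct
    candidate for query i is assumed to sit at index i (illustrative setup)."""
    ranks = sim.argsort(dim=1, descending=True)        # candidates ranked per query
    correct = torch.arange(sim.size(0)).unsqueeze(1)   # ground-truth index per query
    hits = (ranks[:, :k] == correct).any(dim=1)        # ground truth within the top-k?
    return hits.float().mean().item() * 100            # reported as a percentage

# Example: 1000 text queries against 1000 video (or audio) candidates.
sim = torch.randn(1000, 1000)
print(recall_at_k(sim, 1), recall_at_k(sim, 5), recall_at_k(sim, 10))
```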
