Learning Audio-Video Modalities from Image Captions

A major challenge in text-video and text-audio retrieval is the lack of large-scale training data. This is unlike image captioning, where datasets are on the order of millions of samples. To close this gap, we propose a new video mining pipeline that transfers captions from image captioning datasets to video clips with no additional manual effort. Using this pipeline, we create a new large-scale, weakly labelled audio-video captioning dataset consisting of millions of paired clips and captions. We show that training a multimodal transformer-based model on this data achieves competitive performance on video retrieval and video captioning, matching or even outperforming HowTo100M pretraining with 20x fewer clips. We also show that our mined clips are suitable for text-audio pretraining, and achieve state-of-the-art results for the task of audio retrieval.
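
The abstract does not specify the mining pipeline beyond caption transfer, so the following is only a minimal sketch of one plausible realisation: match captioned images to visually similar video frames via nearest-neighbour search over embeddings, then assign the image's caption to the clip containing the matched frame. The function name `mine_captions`, the similarity threshold, and the assumption of L2-normalised embeddings are all illustrative, not the authors' implementation.

```python
import numpy as np

def mine_captions(image_embs, captions, frame_embs, clip_ids, sim_threshold=0.8):
    """Transfer captions from captioned images to visually similar video clips.

    image_embs: (N, D) embeddings of images from a captioning dataset
    captions:   list of N caption strings, aligned with image_embs
    frame_embs: (M, D) embeddings of frames sampled from video clips
    clip_ids:   list of M clip identifiers, aligned with frame_embs
    Embeddings are assumed L2-normalised, so a dot product is cosine similarity.
    Returns a list of (clip_id, caption) pairs, i.e. weakly labelled training data.
    """
    # Cosine similarity between every captioned image and every frame.
    sims = image_embs @ frame_embs.T                      # (N, M)
    best_frame = sims.argmax(axis=1)                      # nearest frame per image
    best_sim = sims[np.arange(len(captions)), best_frame]

    pairs = []
    for i, (frame_idx, sim) in enumerate(zip(best_frame, best_sim)):
        if sim >= sim_threshold:                          # keep confident matches only
            pairs.append((clip_ids[frame_idx], captions[i]))
    return pairs
```

Thresholding on similarity trades dataset size against label noise; since the resulting captions are transferred rather than annotated, the dataset is weakly labelled by construction.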

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Zero-shot Text to Audio Retrieval | AudioCaps | AVFIC | R@10 | 37.7 | # 5 |
| Zero-shot Text to Audio Retrieval | AudioCaps | AVFIC | Audio-to-text R@1 | 8.7 | # 6 |
| Zero-shot Text to Audio Retrieval | Clotho | AVFIC | text-to-audio R@1 | 3.0 | # 7 |
| Zero-shot Text to Audio Retrieval | Clotho | AVFIC | text-to-audio R@10 | 17.5 | # 5 |
| Zero-Shot Video Retrieval | MSR-VTT | A. Nagrani et al. | text-to-video R@1 | 19.4 | # 28 |
| Zero-Shot Video Retrieval | MSR-VTT | A. Nagrani et al. | text-to-video R@5 | 39.5 | # 27 |
| Zero-Shot Video Retrieval | MSR-VTT | A. Nagrani et al. | text-to-video R@10 | 50.3 | # 27 |
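
All metrics above are Recall@K (R@K): the percentage of queries for which the correct item appears among the top K retrieved results. For reference, a minimal sketch of the metric, assuming a precomputed query-to-gallery similarity matrix where the ground-truth match for query i is gallery item i:

```python
import numpy as np

def recall_at_k(sims: np.ndarray, k: int) -> float:
    """sims[i, j] = similarity of query i to gallery item j; the ground-truth
    match for query i is gallery item i. Returns Recall@K as a percentage."""
    # Rank gallery items for each query by descending similarity.
    ranking = np.argsort(-sims, axis=1)
    # A hit occurs if the ground-truth index appears within the top-k ranks.
    hits = (ranking[:, :k] == np.arange(sims.shape[0])[:, None]).any(axis=1)
    return 100.0 * hits.mean()
```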
