MultiSubs: A Large-scale Multimodal and Multilingual Dataset

This paper introduces a large-scale multimodal and multilingual dataset that aims to facilitate research on grounding words to images in their contextual usage in language. The dataset consists of images selected to unambiguously illustrate concepts expressed in sentences from movie subtitles. The dataset is a valuable resource as (i) the images are aligned to text fragments rather than whole sentences; (ii) multiple images are possible for a text fragment and a sentence; (iii) the sentences are free-form and real-world like; (iv) the parallel texts are multilingual. We set up a fill-in-the-blank game for humans to evaluate the quality of the automatic image selection process of our dataset. We show the utility of the dataset on two automatic tasks: (i) fill-in-the-blank; (ii) lexical translation. Results of the human evaluation and automatic models demonstrate that images can be a useful complement to the textual context. The dataset will benefit research on visual grounding of words especially in the context of free-form sentences, and can be obtained from https://doi.org/10.5281/zenodo.5034604 under a Creative Commons licence.
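
As a rough illustration of how such image-to-fragment alignments could be represented, the sketch below shows one hypothetical record. All field names (sentence, fragment, image_ids, translations) are assumptions for illustration only and are not taken from the released schema; consult the Zenodo archive for the actual data format.

```python
# Illustrative sketch only: field names are hypothetical, not the official
# MultiSubs schema. It mirrors the properties described in the abstract:
# images aligned to a text fragment (not the whole sentence), possibly
# several images per fragment, and multilingual parallel text.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class GroundedFragment:
    """One subtitle fragment aligned to one or more illustrative images."""
    sentence: str                 # full free-form subtitle sentence (English)
    fragment: str                 # the word/phrase the images illustrate
    image_ids: List[str]          # one or more images selected for the fragment
    translations: Dict[str, str]  # parallel sentences keyed by language code

example = GroundedFragment(
    sentence="He parked the car outside the station.",
    fragment="car",
    image_ids=["img_000123.jpg", "img_000124.jpg"],
    translations={"fr": "Il a garé la voiture devant la gare."},
)
```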


Datasets


Introduced in the Paper:

MultiSubs

Used in the Paper:

Visual Question Answering
OpenSubtitles

Results

Task                           | Dataset                      | Model                   | Metric          | Value | Global Rank
Multimodal Text Prediction     | MultiSubs                    | 9-gram LM with back-off | Accuracy        | 30.35 | #1
Multimodal Text Prediction     | MultiSubs                    | 9-gram LM with back-off | Word similarity | 0.44  | #1
Multimodal Lexical Translation | MultiSubs English-French     | Multimodal BRNN         | ALI             | 0.81  | #1
Multimodal Lexical Translation | MultiSubs English-German     | Multimodal BRNN         | ALI             | 0.94  | #1
Multimodal Lexical Translation | MultiSubs English-Portuguese | Multimodal BRNN         | ALI             | 0.80  | #1
Multimodal Lexical Translation | MultiSubs English-Spanish    | Multimodal BRNN         | ALI             | 0.81  | #1
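
The fill-in-the-blank rows report Accuracy and Word similarity. As a minimal sketch of how these two scores could be computed, assuming Accuracy is exact-match accuracy over blanks and Word similarity is the mean embedding cosine between predicted and reference words (the paper's exact definitions may differ):

```python
# Minimal sketch of the two fill-in-the-blank metrics listed in the table.
# Assumption: Accuracy = exact match; Word similarity = mean cosine similarity
# between pretrained word vectors of predicted and reference words.
import numpy as np
from typing import Dict, List

def accuracy(preds: List[str], refs: List[str]) -> float:
    """Fraction of blanks where the predicted word matches the reference."""
    return sum(p == r for p, r in zip(preds, refs)) / len(refs)

def word_similarity(preds: List[str], refs: List[str],
                    vectors: Dict[str, np.ndarray]) -> float:
    """Mean cosine similarity between predicted and reference word vectors.

    `vectors` is any pretrained word-embedding lookup (e.g. GloVe/fastText);
    pairs containing an out-of-vocabulary word are scored 0.
    """
    scores = []
    for p, r in zip(preds, refs):
        if p in vectors and r in vectors:
            u, v = vectors[p], vectors[r]
            scores.append(float(np.dot(u, v) /
                                (np.linalg.norm(u) * np.linalg.norm(v))))
        else:
            scores.append(0.0)
    return float(np.mean(scores))
```

Exact-match accuracy penalises near-synonyms as harshly as unrelated words, which is why an embedding-based similarity score is a useful complementary metric for this task.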

Methods


No methods listed for this paper.