Doubly-Attentive Decoder for Multi-modal Neural Machine Translation

ACL 2017  ·  Iacer Calixto, Qun Liu, Nick Campbell

We introduce a Multi-modal Neural Machine Translation model in which a doubly-attentive decoder naturally incorporates spatial visual features obtained using pre-trained convolutional neural networks, bridging the gap between image description and translation. Our decoder learns to attend to source-language words and parts of an image independently by means of two separate attention mechanisms as it generates words in the target language. We find that our model can efficiently exploit not just back-translated in-domain multi-modal data but also large general-domain text-only MT corpora. We also report state-of-the-art results on the Multi30K dataset.
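The two-attention design of the decoder is straightforward to sketch. Below is a minimal, illustrative PyTorch implementation of a single decoding step: additive (Bahdanau-style) attention is computed independently over source-word annotations and over spatial CNN features, and both resulting context vectors are fed to a GRU cell. The class names, dimensions, and the concatenation-based fusion of the two contexts are assumptions made for illustration, not the paper's exact parameterization.

```python
# A minimal sketch of a doubly-attentive decoder step, assuming additive
# attention and context-vector concatenation (illustrative choices, not
# necessarily the authors' exact design).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAttention(nn.Module):
    """Bahdanau-style attention over a set of annotation vectors."""
    def __init__(self, query_dim, key_dim, attn_dim):
        super().__init__()
        self.W_q = nn.Linear(query_dim, attn_dim, bias=False)
        self.W_k = nn.Linear(key_dim, attn_dim, bias=False)
        self.v = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, query, keys):
        # query: (batch, query_dim); keys: (batch, n, key_dim)
        scores = self.v(torch.tanh(self.W_q(query).unsqueeze(1) + self.W_k(keys)))
        alpha = F.softmax(scores, dim=1)        # attention weights (batch, n, 1)
        context = (alpha * keys).sum(dim=1)     # weighted sum (batch, key_dim)
        return context, alpha.squeeze(-1)

class DoublyAttentiveDecoderCell(nn.Module):
    """One decoding step with independent attention over words and image regions."""
    def __init__(self, emb_dim, hid_dim, src_dim, img_dim, attn_dim, vocab_size):
        super().__init__()
        self.src_attn = AdditiveAttention(hid_dim, src_dim, attn_dim)
        self.img_attn = AdditiveAttention(hid_dim, img_dim, attn_dim)
        self.gru = nn.GRUCell(emb_dim + src_dim + img_dim, hid_dim)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, y_emb, h_prev, src_annotations, img_features):
        # Two separate attention mechanisms, both conditioned on h_prev.
        c_src, _ = self.src_attn(h_prev, src_annotations)
        c_img, _ = self.img_attn(h_prev, img_features)
        h = self.gru(torch.cat([y_emb, c_src, c_img], dim=-1), h_prev)
        return self.out(h), h

if __name__ == "__main__":
    cell = DoublyAttentiveDecoderCell(emb_dim=256, hid_dim=512, src_dim=1024,
                                      img_dim=512, attn_dim=256, vocab_size=10000)
    y_emb = torch.randn(2, 256)          # previous target-word embedding
    h = torch.zeros(2, 512)              # previous decoder state
    src = torch.randn(2, 20, 1024)       # bidirectional encoder annotations
    img = torch.randn(2, 196, 512)       # e.g. a 14x14 conv map, flattened
    logits, h = cell(y_emb, h, src, img)
    print(logits.shape)                  # torch.Size([2, 10000])
```

The 196 x 512 image tensor mirrors the common practice of flattening a pre-trained CNN's spatial feature map into a set of region vectors; the exact layer and grid size here are assumptions.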

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Multimodal Machine Translation | Multi30K | NMTSRC+IMG | BLEU (EN-DE) | 37.1 | #11 |
| Multimodal Machine Translation | Multi30K | NMTSRC+IMG | Meteor (EN-DE) | 54.5 | #10 |
