A Visual Attention Grounding Neural Model for Multimodal Machine Translation

We introduce a novel multimodal machine translation model that utilizes parallel visual and textual information. Our model jointly optimizes the learning of a shared visual-language embedding and a translator. The model leverages a visual attention grounding mechanism that links the visual semantics with the corresponding textual semantics. Our approach achieves competitive state-of-the-art results on the Multi30K and the Ambiguous COCO datasets. We also collect a new multilingual, multimodal product description dataset to simulate a real-world international online shopping scenario. On this dataset, our visual attention grounding model outperforms other methods by a large margin.
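The abstract describes a joint objective that couples a translation loss with the learning of a shared visual-language embedding. The snippet below is a minimal sketch of how such a joint loss could be set up in PyTorch; the pairwise max-margin ranking formulation, the `margin` and `lambda_emb` hyperparameters, and the encoder outputs are illustrative assumptions, not the authors' exact VAG-NMT implementation.

```python
# Sketch (assumed, not the released code): translation loss + a visual-text
# embedding loss that grounds sentence representations in the paired image.
import torch
import torch.nn.functional as F

def max_margin_embedding_loss(text_emb, img_emb, margin=0.1):
    """Pairwise ranking loss: pull matched image/sentence embeddings together,
    push mismatched pairs apart (a common choice for a shared visual-language
    embedding). `margin` is an assumed hyperparameter."""
    text_emb = F.normalize(text_emb, dim=-1)          # (B, D)
    img_emb = F.normalize(img_emb, dim=-1)            # (B, D)
    scores = text_emb @ img_emb.t()                   # cosine similarities, (B, B)
    pos = scores.diag().unsqueeze(1)                  # matched-pair scores, (B, 1)
    cost_img = (margin + scores - pos).clamp(min=0)   # rank images per sentence
    cost_txt = (margin + scores - pos.t()).clamp(min=0)  # rank sentences per image
    mask = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
    cost_img = cost_img.masked_fill(mask, 0)          # ignore the diagonal
    cost_txt = cost_txt.masked_fill(mask, 0)
    return cost_img.mean() + cost_txt.mean()

def joint_loss(logits, targets, text_emb, img_emb, lambda_emb=0.5, pad_idx=0):
    """Joint objective: standard cross-entropy translation loss plus the
    embedding loss, weighted by an assumed coefficient `lambda_emb`."""
    nmt_loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
        ignore_index=pad_idx,
    )
    return nmt_loss + lambda_emb * max_margin_embedding_loss(text_emb, img_emb)
```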

EMNLP 2018
Task: Multimodal Machine Translation    Dataset: Multi30K    Model: VAG-NMT

Metric          Value   Global Rank
BLEU (EN-DE)    31.6    #12
Meteor (EN-DE)  52.2    #11
Meteor (EN-FR)  70.3    #4
