Distilling Translations with Visual Awareness

ACL 2019 · Julia Ive, Pranava Madhyastha, Lucia Specia

Previous work on multimodal machine translation has shown that visual information is only needed in very specific cases, for example in the presence of ambiguous words where the textual context is not sufficient. As a consequence, models tend to learn to ignore this information. We propose a translate-and-refine approach to this problem where images are only used by a second-stage decoder. This approach is trained jointly to generate a good first-draft translation and to improve over this draft by (i) making better use of the target-language textual context (both left- and right-side contexts) and (ii) making use of visual context. This approach leads to state-of-the-art results. Additionally, we show that it has the ability to recover from erroneous or missing words in the source language.
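
As a rough illustration of the two-stage idea, the sketch below (not the authors' implementation) shows a text-only first-stage decoder and a second-stage refinement decoder that additionally attends over the full draft translation (left and right context) and over visual features. The module names, dimensions, GRU cells, and multi-head attention layers here are assumptions made for readability; the property taken from the abstract is that images enter only at the second stage.

```python
# Minimal, hypothetical sketch of translate-and-refine decoding (PyTorch).
import torch
import torch.nn as nn

class DraftDecoder(nn.Module):
    """First stage: text-only decoder that produces a draft translation."""
    def __init__(self, vocab, dim):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.src_attn = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)

    def forward(self, src_enc, prev_tokens):
        tgt = self.embed(prev_tokens)
        ctx, _ = self.src_attn(tgt, src_enc, src_enc)   # attend to source only
        hid, _ = self.rnn(tgt + ctx)
        return self.out(hid)                            # draft logits

class RefineDecoder(nn.Module):
    """Second stage: conditions on source, the whole draft, and image features."""
    def __init__(self, vocab, dim, img_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.img_proj = nn.Linear(img_dim, dim)
        self.src_attn = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.draft_attn = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.img_attn = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)

    def forward(self, src_enc, draft_enc, img_feats, prev_tokens):
        tgt = self.embed(prev_tokens)
        img = self.img_proj(img_feats)
        s, _ = self.src_attn(tgt, src_enc, src_enc)       # source context
        d, _ = self.draft_attn(tgt, draft_enc, draft_enc) # left AND right draft context
        v, _ = self.img_attn(tgt, img, img)               # visual context
        hid, _ = self.rnn(tgt + s + d + v)
        return self.out(hid)                              # refined logits

# Toy usage with made-up shapes: batch 2, source length 7, draft length 6,
# 36 image-region features of size 2048 (e.g. from an object detector).
V, D = 1000, 64
draft_dec = DraftDecoder(V, D)
refine_dec = RefineDecoder(V, D, img_dim=2048)
src_enc = torch.randn(2, 7, D)                      # source encoder states
prev = torch.randint(0, V, (2, 6))                  # shifted target tokens
draft_logits = draft_dec(src_enc, prev)             # first pass: draft
draft_enc = draft_dec.embed(draft_logits.argmax(-1))  # re-embed draft (simplified)
img = torch.randn(2, 36, 2048)                      # image region features
refined_logits = refine_dec(src_enc, draft_enc, img, prev)  # second pass: refine
```

The design point this sketch tries to capture is that the first stage can learn a strong text-only translation, while the refinement stage is free to exploit the image and the bidirectional draft context only where they help, e.g. for ambiguous or missing source words.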


Results from the Paper


Ranked #3 on Multimodal Machine Translation on Multi30K (Meteor (EN-FR) metric)

Task                            Dataset   Model    Metric          Value  Global Rank
Multimodal Machine Translation  Multi30K  del      Meteor (EN-FR)  74.6   #3
Multimodal Machine Translation  Multi30K  del+obj  BLEU (EN-DE)    38     #8
Multimodal Machine Translation  Multi30K  del+obj  Meteor (EN-DE)  55.6   #8

Methods


No methods listed for this paper.