Does Multimodality Help Human and Machine for Translation and Image Captioning?
This paper presents the systems developed by LIUM and CVC for the WMT16 Multimodal Machine Translation challenge. We explored various comparative methods, namely phrase-based systems and attentional recurrent neural network models trained using monomodal or multimodal data. We also performed a human evaluation in order to estimate the usefulness of multimodal data for human machine translation and image description generation. Our systems obtained the best results for both tasks according to the automatic evaluation metrics BLEU and METEOR.
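The abstract refers to attentional recurrent models that can attend over multimodal inputs. The following is a minimal illustrative sketch of one attentional decoder step over pooled textual and visual annotations; it is an assumption for clarity, not the exact LIUM/CVC architecture, and all dimensions, parameters, and the concatenation-based fusion are hypothetical.

```python
import numpy as np

# Illustrative sketch: one additive (Bahdanau-style) attention step where the
# decoder attends jointly over source-word encoder states and spatial image
# features. Not the authors' exact model; sizes and fusion are assumptions.

rng = np.random.default_rng(0)

d_feat, d_hid = 8, 16            # assumed annotation / decoder hidden sizes
T_txt, T_img = 5, 4              # number of source words / image regions

txt_annotations = rng.normal(size=(T_txt, d_feat))  # encoder states for source words
img_annotations = rng.normal(size=(T_img, d_feat))  # CNN features for image regions

# Fuse modalities by pooling both sets of annotations into one matrix.
annotations = np.concatenate([txt_annotations, img_annotations], axis=0)

# Previous decoder state and attention parameters (random, for illustration).
s_prev = rng.normal(size=d_hid)
W_a = rng.normal(size=(d_hid, d_feat)) * 0.1
U_a = rng.normal(size=(d_feat, d_feat)) * 0.1
v_a = rng.normal(size=d_feat) * 0.1

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Score every annotation (word or region) against the decoder state,
# then form the multimodal context vector as the weighted sum.
scores = np.tanh(s_prev @ W_a + annotations @ U_a) @ v_a
alpha = softmax(scores)            # attention weights over words + regions
context = alpha @ annotations      # context vector fed to the next decoder step

print("attention weights:", np.round(alpha, 3))
print("context vector shape:", context.shape)
```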