Does Multimodality Help Human and Machine for Translation and Image Captioning?

WS 2016

Ozan Caglayan, Walid Aransa, Yaxing Wang, Marc Masana, Mercedes García-Martínez, Fethi Bougares, Loïc Barrault, Joost van de Weijer

This paper presents the systems developed by LIUM and CVC for the WMT16 Multimodal Machine Translation challenge. We explored various comparative methods, namely phrase-based systems and attentional recurrent neural network models trained using monomodal or multimodal data...
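The paper itself includes no code, but the attentional recurrent models it mentions rely on an attention mechanism over source annotations. Below is a minimal numpy sketch of Bahdanau-style additive attention as a generic illustration of that idea; the function name, parameter names, and dimensions are all hypothetical and not taken from the paper, whose exact architecture may differ.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def additive_attention(decoder_state, encoder_states, W_dec, W_enc, v):
    """Generic Bahdanau-style additive attention (illustrative, not the paper's model).

    decoder_state:  (d_dec,)    current decoder hidden state
    encoder_states: (T, d_enc)  one annotation vector per source token
    W_dec: (d_att, d_dec), W_enc: (d_att, d_enc), v: (d_att,)  learned parameters
    Returns a context vector (d_enc,) and attention weights (T,).
    """
    # Score each source position against the current decoder state.
    scores = np.tanh(encoder_states @ W_enc.T + decoder_state @ W_dec.T) @ v  # (T,)
    weights = softmax(scores)           # normalize scores into a distribution
    context = weights @ encoder_states  # weighted sum of source annotations
    return context, weights

# Toy usage with random parameters and hypothetical dimensions.
rng = np.random.default_rng(0)
T, d_enc, d_dec, d_att = 5, 8, 6, 4
ctx, w = additive_attention(
    rng.standard_normal(d_dec),
    rng.standard_normal((T, d_enc)),
    rng.standard_normal((d_att, d_dec)),
    rng.standard_normal((d_att, d_enc)),
    rng.standard_normal(d_att),
)
print(w.round(3), ctx.shape)
```

At each decoding step the weights re-distribute over the source tokens, so the context vector summarizes whichever part of the input is currently most relevant; multimodal variants extend this by also attending over image features.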

