no code implementations • AMTA 2016 • Ander Martinez, Yuji Matsumoto
This article combines three ideas (splitting words into smaller units, using an extra dataset from a related language pair, and using monolingual data) to improve the performance of NMT models on language pairs with limited data.
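As an illustrative sketch only (the paper's exact subword scheme may differ), byte-pair encoding (BPE) is a common way to split words into smaller units: it repeatedly merges the most frequent adjacent symbol pair in the training corpus, so frequent character sequences become single subword symbols.

```python
from collections import Counter

def bpe_merges(words, num_merges):
    """Learn up to num_merges BPE merge operations from a word list.
    Toy implementation for illustration, not the paper's code."""
    # Represent each word as a tuple of symbols (characters to start).
    vocab = Counter(tuple(w) for w in words)
    merges = []
    for _ in range(num_merges):
        # Count adjacent symbol pairs across the corpus.
        pairs = Counter()
        for word, freq in vocab.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Apply the most frequent merge everywhere in the vocabulary.
        new_vocab = Counter()
        for word, freq in vocab.items():
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    out.append(word[i] + word[i + 1])
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            new_vocab[tuple(out)] += freq
        vocab = new_vocab
    return merges, vocab

merges, vocab = bpe_merges(["lower", "lowest", "newer", "wider"], 4)
```

The learned merge list can then segment unseen words into known subwords, which is what lets an NMT model share vocabulary across a low-resource pair and a related high-resource pair.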
no code implementations • WMT (EMNLP) 2021 • Ander Martinez
This paper describes the Fujitsu DMATH systems used for WMT 2021 News Translation and Biomedical Translation tasks.
no code implementations • IJCNLP 2017 • An Nguyen Le, Ander Martinez, Akifumi Yoshimoto, Yuji Matsumoto
To assess performance, we build an attention-based encoder-decoder model in which the source sentence is fed to the encoder as a token sequence and the decoder generates the target sentence as a linearized dependency tree.
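A linearized dependency tree of the kind described above can be produced by a depth-first bracketing of the tree. A minimal sketch, with an illustrative bracketing convention (the paper's exact linearization format is not specified here):

```python
def linearize(tokens, heads, labels, node=None):
    """Depth-first linearization of a dependency tree into a bracketed
    string, e.g. "(root saw (nsubj I) (obj her))".
    heads[i] is the index of token i's head (-1 marks the root);
    labels[i] is the dependency relation of token i.
    Illustrative format, not the paper's exact scheme."""
    if node is None:
        node = heads.index(-1)  # start at the root token
    children = [i for i, h in enumerate(heads) if h == node]
    inner = " ".join(linearize(tokens, heads, labels, c) for c in children)
    body = tokens[node] + ((" " + inner) if inner else "")
    return "(" + labels[node] + " " + body + ")"

# "I saw her": I -> saw (nsubj), saw -> root, her -> saw (obj)
print(linearize(["I", "saw", "her"], [1, -1, 1], ["nsubj", "root", "obj"]))
# → (root saw (nsubj I) (obj her))
```

Such a string is just a token sequence over words plus bracket and relation symbols, so a standard sequence decoder can emit it one symbol at a time.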