no code implementations • IJCNLP 2017 • An Nguyen Le, Ander Martinez, Akifumi Yoshimoto, Yuji Matsumoto
To assess performance, we construct a model based on an attention-based encoder-decoder architecture, in which the source language is fed to the encoder as a sequence and the decoder generates the target language as a linearized dependency tree.
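The abstract's key idea is that a tree can be serialized into a flat token sequence so an ordinary seq2seq decoder can emit it. A minimal sketch (not the authors' code; the node format and bracket scheme here are illustrative assumptions) of one such depth-first linearization:

```python
# Hypothetical node format: (word, dependency_relation, children).
def linearize(node):
    """Depth-first bracketed linearization of a dependency subtree,
    producing a flat token list a seq2seq decoder could generate."""
    word, deprel, children = node
    tokens = [f"({deprel}", word]  # open bracket labeled with the relation
    for child in children:
        tokens.extend(linearize(child))
    tokens.append(")")             # close the subtree
    return tokens

# Example: "She eats apples", rooted at "eats".
tree = ("eats", "root",
        [("She", "nsubj", []),
         ("apples", "obj", [])])
print(" ".join(linearize(tree)))
# → (root eats (nsubj She ) (obj apples ) )
```

Because the brackets are ordinary output tokens, the decoder needs no tree-specific machinery; the tree is recovered by re-parsing the brackets after generation.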
no code implementations • WS 2016 • Ayaka Morimoto, Akifumi Yoshimoto, Akihiko Kato, Hiroyuki Shindo, Yuji Matsumoto
This paper presents our ongoing work on the compilation of an English multi-word expression (MWE) lexicon.