no code implementations • 23 Mar 2020 • Koichiro Yoshino, Kohei Wakimoto, Yuta Nishimura, Satoshi Nakamura
Two reasons make it challenging to apply existing sequence-to-sequence models to this mapping: 1) it is hard to prepare a large-scale dataset for every kind of robot and environment, and 2) there is a mismatch between the number of samples obtained from robot action observations and the number of word sequences in the generated captions.
no code implementations • IWSLT (EMNLP) 2018 • Yuta Nishimura, Katsuhito Sudoh, Graham Neubig, Satoshi Nakamura
By using information from these multiple sources, these systems achieve large gains in accuracy.
no code implementations • WS 2018 • Yuta Nishimura, Katsuhito Sudoh, Graham Neubig, Satoshi Nakamura
This study focuses on the use of incomplete multilingual corpora in multi-encoder NMT and mixture-of-NMT-experts models, and examines a very simple implementation in which missing source translations are replaced by a special symbol <NULL>.
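The replacement idea above can be sketched as a small preprocessing step. This is a hypothetical illustration, not the authors' code: the function name, the corpus representation, and the language keys are assumptions; only the use of a special <NULL> symbol for missing source translations comes from the abstract.

```python
# Hypothetical sketch: padding an incomplete multilingual corpus for
# multi-source NMT by substituting <NULL> for missing source translations,
# so every example exposes the same set of source inputs.

NULL_TOKEN = "<NULL>"

def fill_missing_sources(corpus, source_langs):
    """Replace absent source-language sentences with the <NULL> symbol."""
    filled = []
    for example in corpus:
        filled.append({
            lang: example.get(lang) or NULL_TOKEN
            for lang in source_langs
        })
    return filled

# Toy incomplete corpus: the French side of the second example is missing.
corpus = [
    {"en": "Hello .", "fr": "Bonjour ."},
    {"en": "Thank you .", "fr": None},
]

filled = fill_missing_sources(corpus, ["en", "fr"])
print(filled[1])  # the gap is now the special symbol rather than an absent field
```

A multi-encoder model can then consume every example uniformly, with the <NULL> input signaling that one source is unavailable.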