no code implementations • COLING 2020 • Tetsuro Nishihara, Akihiro Tamura, Takashi Ninomiya, Yutaro Omote, Hideki Nakayama
This paper proposes a supervised visual attention mechanism for multimodal neural machine translation (MNMT), trained with constraints based on manual alignments between words in a sentence and their corresponding regions of an image.
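The entry above describes supervising attention with manual word-region alignments. A minimal sketch of one plausible training signal, assuming the gold alignments are converted into a distribution over image regions per word (the function name, shapes, and cross-entropy formulation are illustrative assumptions, not the paper's exact loss):

```python
import numpy as np

def supervised_attention_loss(attn, gold_align, eps=1e-9):
    """Hypothetical sketch: penalize divergence between the model's
    attention over image regions and a gold alignment distribution
    derived from manual word-region annotations.

    attn:       (num_words, num_regions) softmax attention weights
    gold_align: (num_words, num_regions) gold alignment distribution
    Returns the mean cross-entropy over words.
    """
    attn = np.clip(attn, eps, 1.0)  # avoid log(0)
    return float(np.mean(-np.sum(gold_align * np.log(attn), axis=1)))
```

Such a term would typically be added to the usual translation loss, so the attention heads are pushed toward the annotated regions while the model still optimizes translation quality.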
no code implementations • Asian Chapter of the Association for Computational Linguistics 2020 • Yutaro Omote, Kyoumoto Matsushita, Tomoya Iwakura, Akihiro Tamura, Takashi Ninomiya
Instead of handcrafted rules, we propose Transformer-based models that predict SMILES strings from chemical compound names.
no code implementations • RANLP 2019 • Yutaro Omote, Akihiro Tamura, Takashi Ninomiya
This paper proposes a new Transformer neural machine translation model that incorporates syntactic distances between two source words into the relative position representations of the self-attention mechanism.
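The entry above describes injecting syntactic distances into relative position representations. A minimal sketch of one way such indices could be built, assuming a precomputed pairwise syntactic distance matrix (e.g. dependency-tree path lengths); the sign/magnitude combination and clipping scheme here are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def relative_position_ids(seq_len, syn_dist, max_dist=4):
    """Hypothetical sketch: index relative position embeddings for
    self-attention using word order for direction and syntactic
    distance for magnitude.

    syn_dist: (seq_len, seq_len) syntactic distance matrix
    Returns an integer index matrix in [0, 2 * max_dist].
    """
    i = np.arange(seq_len)
    linear = i[None, :] - i[:, None]       # signed linear offsets
    combined = np.sign(linear) * syn_dist  # direction from word order,
                                           # magnitude from the tree
    clipped = np.clip(combined, -max_dist, max_dist)
    return (clipped + max_dist).astype(int)  # shift to valid indices
```

The resulting index matrix would replace the purely linear clipped offsets of standard relative position representations when looking up the embeddings added to attention scores.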