no code implementations • SIGDIAL (ACL) 2021 • Koshiro Okano, Yu Suzuki, Masaya Kawamura, Tsuneo Kato, Akihiro Tamura, Jianming Wu
Responses generated by neural conversational models (NCMs) for non-task-oriented systems are difficult to evaluate.
no code implementations • BioNLP (ACL) 2022 • Taiki Watanabe, Tomoya Ichikawa, Akihiro Tamura, Tomoya Iwakura, Chunpeng Ma, Tsuneo Kato
As one approach to improving NER, multi-task learning, which trains a single model on multiple training datasets, has been used.
no code implementations • RANLP 2021 • Yuki Yano, Akihiro Tamura, Takashi Ninomiya, Hiroaki Obayashi
This study proposes an utterance position-aware approach for a neural network-based dialogue act recognition (DAR) model, which incorporates positional encoding for an utterance's absolute or relative position.
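A minimal sketch of the idea, assuming a dialogue-level encoder that consumes one vector per utterance: each utterance vector receives a sinusoidal encoding of its absolute position in the dialogue (a relative variant would instead encode the offset from the utterance being classified). The function and variable names below are illustrative, not the authors' implementation.

```python
# Minimal sketch of utterance-position encoding for dialogue act recognition.
# Names and dimensions are illustrative, not taken from the paper.
import math
import torch

def sinusoidal_encoding(position: int, d_model: int) -> torch.Tensor:
    """Standard sinusoidal encoding for a single (absolute) utterance position."""
    pe = torch.zeros(d_model)
    div = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
    pe[0::2] = torch.sin(position * div)
    pe[1::2] = torch.cos(position * div)
    return pe

def add_utterance_positions(utterance_vecs: torch.Tensor) -> torch.Tensor:
    """utterance_vecs: (num_utterances, d_model) dialogue-level representations."""
    n, d = utterance_vecs.shape
    pe = torch.stack([sinusoidal_encoding(i, d) for i in range(n)])
    return utterance_vecs + pe  # position-aware utterance representations

# Example: a 5-utterance dialogue with 64-dimensional utterance vectors.
dialogue = torch.randn(5, 64)
position_aware = add_utterance_positions(dialogue)
```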
1 code implementation • LREC 2022 • Kazuki Tani, Ryoya Yuasa, Kazuki Takikawa, Akihiro Tamura, Tomoyuki Kajiwara, Takashi Ninomiya, Tsuneo Kato
Therefore, we create a benchmark test dataset for Japanese-to-English MLCC-MT from the Newsela corpus by introducing automatic filtering of data with inappropriate sentence-level complexity, manual checking of parallel target-language sentences with different complexity levels, and manual translation.
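As a rough illustration of the automatic filtering step, the sketch below drops candidate target sentences whose estimated sentence-level complexity disagrees with their assigned Newsela level; the length-based complexity proxy, the `Candidate` record, and the tolerance threshold are hypothetical stand-ins, not the criteria used to build the dataset.

```python
# Toy sketch of automatic complexity filtering: drop candidate target sentences
# whose estimated sentence-level complexity disagrees with their assigned level.
# The complexity estimate (a crude length-based proxy) and all thresholds are
# illustrative assumptions, not the filtering criterion used in the paper.
from dataclasses import dataclass

@dataclass
class Candidate:
    source_ja: str        # Japanese source sentence
    target_en: str        # English target sentence
    level: int            # Newsela complexity level assigned to the target (0 = original)

def estimated_level(sentence: str) -> int:
    """Crude proxy: longer sentences are treated as more complex (lower Newsela level)."""
    words = len(sentence.split())
    if words > 25:
        return 0
    if words > 15:
        return 1
    if words > 8:
        return 2
    return 3

def filter_candidates(candidates: list[Candidate], tolerance: int = 1) -> list[Candidate]:
    """Keep pairs whose estimated and assigned complexity levels roughly agree."""
    return [c for c in candidates
            if abs(estimated_level(c.target_en) - c.level) <= tolerance]
```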
no code implementations • ACL 2021 • Hiroyuki Deguchi, Akihiro Tamura, Takashi Ninomiya
This paper proposes a novel attention mechanism for Transformer Neural Machine Translation, "Synchronous Syntactic Attention," inspired by synchronous dependency grammars.
no code implementations • NAACL 2021 • Kazuki Akiyama, Akihiro Tamura, Takashi Ninomiya
This paper proposes a new abstractive document summarization model, hierarchical BART (Hie-BART), which captures hierarchical structures of a document (i.e., sentence-word structures) in the BART model.
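The following sketch only illustrates the sentence-word hierarchy in the abstract: word-level states are pooled into sentence vectors, the sentence vectors attend to one another, and the result is broadcast back to the words. It is not the Hie-BART architecture; the class, pooling choice, and sizes are assumptions.

```python
# Schematic sketch of capturing sentence-word structure on top of token encodings.
# Purely illustrative; not the Hie-BART model.
import torch
import torch.nn as nn

class SentenceWordEncoder(nn.Module):
    def __init__(self, d_model=64, nhead=4):
        super().__init__()
        self.sentence_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)

    def forward(self, word_states, sentence_ids):
        """word_states: (1, num_words, d); sentence_ids[i] = sentence index of word i."""
        num_sents = int(sentence_ids.max().item()) + 1
        # Mean-pool word states into one vector per sentence.
        sent_vecs = torch.stack([word_states[0, sentence_ids == s].mean(dim=0)
                                 for s in range(num_sents)]).unsqueeze(0)
        # Sentence-level self-attention over the pooled vectors.
        sent_ctx, _ = self.sentence_attn(sent_vecs, sent_vecs, sent_vecs)
        # Broadcast each sentence's context back to its words.
        return word_states + sent_ctx[0, sentence_ids].unsqueeze(0)

word_states = torch.randn(1, 10, 64)                        # e.g. token-level encoder outputs
sentence_ids = torch.tensor([0, 0, 0, 1, 1, 1, 1, 2, 2, 2]) # word-to-sentence mapping
enriched = SentenceWordEncoder()(word_states, sentence_ids)
```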
no code implementations • AACL 2020 • Yutaro Omote, Kyoumoto Matsushita, Tomoya Iwakura, Akihiro Tamura, Takashi Ninomiya
Instead of handcrafted rules, we propose Transformer-based models that predict SMILES strings from chemical compound names.
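A minimal character-level sketch of such a model, assuming a plain `torch.nn.Transformer` encoder-decoder over name characters and SMILES characters; the vocabularies, dimensions, and example inputs are illustrative and unrelated to the models evaluated in the paper.

```python
# Minimal character-level Transformer sketch for mapping compound names to
# SMILES strings. Vocabulary, dimensions, and inputs are illustrative assumptions.
import torch
import torch.nn as nn

SRC_CHARS = list("abcdefghijklmnopqrstuvwxyz -,()0123456789")
TGT_CHARS = list("CNOSPFclBrI=#()[]1234567890+@H")
src_vocab = {c: i + 1 for i, c in enumerate(SRC_CHARS)}   # 0 = padding
tgt_vocab = {c: i + 1 for i, c in enumerate(TGT_CHARS)}

class Name2Smiles(nn.Module):
    def __init__(self, d_model: int = 128):
        super().__init__()
        self.src_emb = nn.Embedding(len(src_vocab) + 1, d_model)
        self.tgt_emb = nn.Embedding(len(tgt_vocab) + 1, d_model)
        self.transformer = nn.Transformer(d_model=d_model, nhead=8,
                                          num_encoder_layers=2, num_decoder_layers=2,
                                          batch_first=True)
        self.out = nn.Linear(d_model, len(tgt_vocab) + 1)

    def forward(self, src_ids, tgt_ids):
        # Causal mask so each SMILES position only attends to earlier positions.
        tgt_mask = self.transformer.generate_square_subsequent_mask(tgt_ids.size(1))
        h = self.transformer(self.src_emb(src_ids), self.tgt_emb(tgt_ids), tgt_mask=tgt_mask)
        return self.out(h)  # per-position logits over SMILES characters

def encode(text, vocab):
    return torch.tensor([[vocab[c] for c in text if c in vocab]])

model = Name2Smiles()
logits = model(encode("2-propanol", src_vocab), encode("CC(O)C", tgt_vocab))
```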
no code implementations • COLING 2020 • Hiroyuki Deguchi, Masao Utiyama, Akihiro Tamura, Takashi Ninomiya, Eiichiro Sumita
This paper proposes a new subword segmentation method for neural machine translation, "Bilingual Subword Segmentation," which tokenizes sentences so as to minimize the difference between the number of subword units in a sentence and that of its translation.
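The sketch below illustrates only the criterion, under the assumption that several candidate segmentations (e.g., from tokenizers with different vocabulary sizes) are available for each sentence pair: prefer the pair whose numbers of subword units are closest. It is not the paper's segmentation algorithm.

```python
# Illustrative sketch of the objective behind bilingual subword segmentation:
# among candidate segmentations of a sentence pair, prefer the pair whose
# numbers of subword units are closest. Candidates here are hypothetical.

def best_bilingual_segmentation(src_candidates, tgt_candidates):
    """Each argument is a list of candidate token lists for the same sentence."""
    best_pair, best_gap = None, float("inf")
    for src_tokens in src_candidates:
        for tgt_tokens in tgt_candidates:
            gap = abs(len(src_tokens) - len(tgt_tokens))
            if gap < best_gap:
                best_pair, best_gap = (src_tokens, tgt_tokens), gap
    return best_pair

src_candidates = [["inter", "national", "ization"], ["internationalization"]]
tgt_candidates = [["国際", "化"], ["国", "際", "化"]]
print(best_bilingual_segmentation(src_candidates, tgt_candidates))
```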
no code implementations • COLING 2020 • Tetsuro Nishihara, Akihiro Tamura, Takashi Ninomiya, Yutaro Omote, Hideki Nakayama
This paper proposes a supervised visual attention mechanism for multimodal neural machine translation (MNMT), trained with constraints based on manual alignments between words in a sentence and their corresponding regions of an image.
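One plausible way to express such supervision, sketched below, is an extra cross-entropy term between the model's attention distribution over image regions and the manually aligned regions, added to the usual translation loss; the loss form, weighting, and shapes are assumptions rather than the paper's exact formulation.

```python
# Minimal sketch of supervising a visual attention distribution with manual
# word-region alignments. Variable names and the weighting are illustrative.
import torch

def attention_supervision_loss(pred_attn: torch.Tensor,
                               gold_align: torch.Tensor) -> torch.Tensor:
    """pred_attn:  (num_words, num_regions), rows sum to 1 (softmax attention)
       gold_align: (num_words, num_regions), manual alignments, rows sum to 1"""
    eps = 1e-8
    return -(gold_align * torch.log(pred_attn + eps)).sum(dim=-1).mean()

# Example: 4 target words, 9 image regions.
pred_attn = torch.softmax(torch.randn(4, 9), dim=-1)
gold_align = torch.zeros(4, 9)
gold_align[torch.arange(4), torch.tensor([0, 3, 3, 7])] = 1.0  # manual word-region links
translation_loss = torch.tensor(0.0)  # stands in for the usual NMT cross-entropy
total_loss = translation_loss + 1.0 * attention_supervision_loss(pred_attn, gold_align)
```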
no code implementations • LREC 2020 • Hideki Nakayama, Akihiro Tamura, Takashi Ninomiya
To verify our dataset, we performed phrase localization experiments in both languages and investigated the effectiveness of our Japanese annotations as well as multilingual learning realized by our dataset.
no code implementations • IJCNLP 2019 • Taiki Watanabe, Akihiro Tamura, Takashi Ninomiya, Takuya Makino, Tomoya Iwakura
We propose a method to improve named entity recognition (NER) for chemical compounds using multi-task learning by jointly training a chemical NER model and a chemical compound paraphrase model.
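A schematic sketch of the joint setup, assuming a shared encoder feeding a token-level NER head and a heavily simplified paraphrase head whose losses are summed; layer sizes, tag counts, and the reduction of the paraphrase model to a single linear head are illustrative simplifications, not the paper's architecture.

```python
# Schematic multi-task sketch: a shared encoder feeds both a chemical NER tagging
# head and a simplified compound-name paraphrase head; losses are summed.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self, vocab_size=5000, d=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d)
        self.lstm = nn.LSTM(d, d, batch_first=True, bidirectional=True)

    def forward(self, token_ids):
        out, _ = self.lstm(self.emb(token_ids))
        return out  # (batch, seq_len, 2 * d)

class MultiTaskModel(nn.Module):
    def __init__(self, num_ner_tags=9, tgt_vocab=5000, d=128):
        super().__init__()
        self.encoder = SharedEncoder(d=d)
        self.ner_head = nn.Linear(2 * d, num_ner_tags)       # token-level BIO tags
        self.paraphrase_head = nn.Linear(2 * d, tgt_vocab)   # simplified generation head

    def forward(self, token_ids):
        h = self.encoder(token_ids)
        return self.ner_head(h), self.paraphrase_head(h)

model = MultiTaskModel()
ner_logits, para_logits = model(torch.randint(0, 5000, (2, 12)))
ner_loss = nn.functional.cross_entropy(ner_logits.transpose(1, 2),
                                       torch.randint(0, 9, (2, 12)))
para_loss = nn.functional.cross_entropy(para_logits.transpose(1, 2),
                                        torch.randint(0, 5000, (2, 12)))
loss = ner_loss + para_loss  # joint training objective
```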
no code implementations • RANLP 2019 • Hiroyuki Deguchi, Akihiro Tamura, Takashi Ninomiya
In this paper, we propose a new Transformer neural machine translation (NMT) model that incorporates dependency relations into self-attention on both source and target sides, dependency-based self-attention.
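One simple way to realize such a mechanism, sketched below under assumed names, is to add a learned bonus to the attention logits of head-dependent pairs; the paper's actual formulation of dependency-based self-attention is not reproduced here.

```python
# Minimal sketch of biasing self-attention toward dependency relations: attention
# logits between a word and its syntactic head receive an additive learned bonus.
import math
import torch
import torch.nn as nn

class DependencyBiasedSelfAttention(nn.Module):
    def __init__(self, d_model=64):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.dep_bias = nn.Parameter(torch.tensor(1.0))  # bonus for head-dependent pairs

    def forward(self, x, heads):
        """x: (seq_len, d_model); heads[i] = index of token i's dependency head."""
        scores = self.q(x) @ self.k(x).T / math.sqrt(x.size(-1))
        dep_mask = torch.zeros_like(scores)
        idx = torch.arange(x.size(0))
        dep_mask[idx, heads] = 1.0          # dependent -> head
        dep_mask[heads, idx] = 1.0          # head -> dependent (symmetric, an assumption)
        attn = torch.softmax(scores + self.dep_bias * dep_mask, dim=-1)
        return attn @ self.v(x)

# "She reads books": token 1 ("reads") is the root and head of tokens 0 and 2.
x = torch.randn(3, 64)
heads = torch.tensor([1, 1, 1])  # root points to itself for simplicity
out = DependencyBiasedSelfAttention()(x, heads)
```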
no code implementations • RANLP 2019 • Yutaro Omote, Akihiro Tamura, Takashi Ninomiya
This paper proposes a new Transformer neural machine translation model that incorporates syntactic distances between two source words into the relative position representations of the self-attention mechanism.
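The sketch below shows only the preprocessing half of the idea: computing pairwise path lengths in the dependency tree, which could then be bucketed and fed into the relative-position embeddings of self-attention; it is not the paper's model code, and the helper names are assumptions.

```python
# Sketch of computing pairwise syntactic distances from a dependency tree
# (path length between words). Purely illustrative.
from collections import deque

def tree_distances(heads):
    """heads[i] = index of token i's dependency head (root points to itself)."""
    n = len(heads)
    adj = [set() for _ in range(n)]
    for i, h in enumerate(heads):
        if i != h:
            adj[i].add(h)
            adj[h].add(i)
    dist = [[0] * n for _ in range(n)]
    for start in range(n):
        seen, queue = {start}, deque([(start, 0)])
        while queue:
            node, d = queue.popleft()
            dist[start][node] = d
            for nxt in adj[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, d + 1))
    return dist

# "She reads old books": "She" and "books" attach to "reads"; "old" modifies "books".
print(tree_distances([1, 1, 3, 1]))
```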
no code implementations • NAACL 2019 • Chunpeng Ma, Akihiro Tamura, Masao Utiyama, Eiichiro Sumita, Tiejun Zhao
The explicit use of syntactic information has proven useful for neural machine translation (NMT).
no code implementations • COLING 2018 • Arata Ugawa, Akihiro Tamura, Takashi Ninomiya, Hiroya Takamura, Manabu Okumura
To alleviate these problems, the encoder of the proposed model encodes the input word on the basis of its NE tag at each time step, which could reduce the ambiguity of the input word.
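A minimal sketch of an NE-tag-aware encoder, assuming the word embedding is simply concatenated with an NE-tag embedding at each time step before a BiLSTM; the tag set, dimensions, and class names are illustrative, not the paper's configuration.

```python
# Minimal sketch of encoding each input word together with its NE tag: the word
# embedding is concatenated with an NE-tag embedding at every time step.
import torch
import torch.nn as nn

class NETagAwareEncoder(nn.Module):
    def __init__(self, vocab_size=10000, num_tags=9, d_word=256, d_tag=32):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_word)
        self.tag_emb = nn.Embedding(num_tags, d_tag)   # e.g. O, B-PER, I-PER, B-LOC, ...
        self.rnn = nn.LSTM(d_word + d_tag, 256, batch_first=True, bidirectional=True)

    def forward(self, word_ids, tag_ids):
        x = torch.cat([self.word_emb(word_ids), self.tag_emb(tag_ids)], dim=-1)
        outputs, _ = self.rnn(x)
        return outputs  # NE-aware source representations for the attention/decoder

encoder = NETagAwareEncoder()
words = torch.randint(0, 10000, (1, 6))
tags = torch.randint(0, 9, (1, 6))
source_states = encoder(words, tags)
```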
no code implementations • ACL 2018 • Chunpeng Ma, Akihiro Tamura, Masao Utiyama, Tiejun Zhao, Eiichiro Sumita
Tree-based neural machine translation (NMT) approaches, although they achieve impressive performance, suffer from a major drawback: they use only the 1-best parse tree to direct the translation, which potentially introduces translation mistakes due to parsing errors.
no code implementations • IJCNLP 2017 • Taiki Watanabe, Akihiro Tamura, Takashi Ninomiya
This paper proposes a new attention mechanism for neural machine translation (NMT) based on convolutional neural networks (CNNs), which is inspired by the CKY algorithm.
no code implementations • EMNLP 2017 • Kehai Chen, Rui Wang, Masao Utiyama, Lemao Liu, Akihiro Tamura, Eiichiro Sumita, Tiejun Zhao
Source dependency information has been successfully introduced into statistical machine translation.