Search Results for author: Akihiro Tamura

Found 25 papers, 1 paper with code

Utterance Position-Aware Dialogue Act Recognition

no code implementations • RANLP 2021 • Yuki Yano, Akihiro Tamura, Takashi Ninomiya, Hiroaki Obayashi

This study proposes an utterance position-aware approach for a neural network-based dialogue act recognition (DAR) model, which incorporates positional encoding of an utterance’s absolute or relative position in the dialogue.

Position
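
As a rough illustration of the idea (not the paper’s exact formulation), the sketch below adds a sinusoidal encoding of each utterance’s absolute position in the dialogue to its utterance representation; the dimensionality and the sinusoidal scheme are assumptions.

```python
import numpy as np

def sinusoidal_encoding(position, dim):
    """Standard sinusoidal encoding of a single integer position."""
    enc = np.zeros(dim)
    for i in range(0, dim, 2):
        angle = position / (10000 ** (i / dim))
        enc[i] = np.sin(angle)
        if i + 1 < dim:
            enc[i + 1] = np.cos(angle)
    return enc

def add_utterance_positions(utterance_vectors):
    """Add an absolute-position encoding to each utterance vector in a dialogue.

    `utterance_vectors` is a (num_utterances, dim) array of utterance
    representations produced by some upstream encoder (assumed here).
    """
    num_utterances, dim = utterance_vectors.shape
    positions = np.stack([sinusoidal_encoding(p, dim) for p in range(num_utterances)])
    return utterance_vectors + positions

# Toy usage: a dialogue of 5 utterances, each represented by a 16-dim vector.
dialogue = np.random.randn(5, 16)
encoded = add_utterance_positions(dialogue)
```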

A Benchmark Dataset for Multi-Level Complexity-Controllable Machine Translation

1 code implementation • LREC 2022 • Kazuki Tani, Ryoya Yuasa, Kazuki Takikawa, Akihiro Tamura, Tomoyuki Kajiwara, Takashi Ninomiya, Tsuneo Kato

We create a benchmark test dataset for Japanese-to-English MLCC-MT from the Newsela corpus by introducing automatic filtering of data with inappropriate sentence-level complexity, manual checking of parallel target-language sentences with different complexity levels, and manual translation.

Machine Translation, NMT, +2

Synchronous Syntactic Attention for Transformer Neural Machine Translation

no code implementations • ACL 2021 • Hiroyuki Deguchi, Akihiro Tamura, Takashi Ninomiya

This paper proposes a novel attention mechanism for Transformer Neural Machine Translation, “Synchronous Syntactic Attention,” inspired by synchronous dependency grammars.

Machine Translation, Translation

Hie-BART: Document Summarization with Hierarchical BART

no code implementations • NAACL 2021 • Kazuki Akiyama, Akihiro Tamura, Takashi Ninomiya

This paper proposes a new abstractive document summarization model, hierarchical BART (Hie-BART), which captures hierarchical structures of a document (i.e., sentence-word structures) in the BART model.

Document Summarization, Machine Translation, +2

Bilingual Subword Segmentation for Neural Machine Translation

no code implementations • COLING 2020 • Hiroyuki Deguchi, Masao Utiyama, Akihiro Tamura, Takashi Ninomiya, Eiichiro Sumita

This paper proposes a new subword segmentation method for neural machine translation, “Bilingual Subword Segmentation,” which tokenizes sentences to minimize the difference between the number of subword units in a sentence and that of its translation.

Machine Translation, Segmentation, +2
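
The quantity the abstract describes (the gap between the number of subword units in a sentence and in its translation) is easy to illustrate; the sketch below is only a toy view of that objective, with hand-written candidate segmentations standing in for a real subword model and its search.

```python
def subword_count_gap(src_tokens, tgt_tokens):
    """Absolute difference in the number of subword units between a sentence pair.

    Bilingual subword segmentation, as described in the abstract, favors
    segmentations that keep this gap small; here the segmentations are
    simply given as token lists (a real system would search over them).
    """
    return abs(len(src_tokens) - len(tgt_tokens))

def pick_balanced_segmentation(src_candidates, tgt_tokens):
    """Among candidate segmentations of the source sentence, pick the one
    whose subword count is closest to that of the target segmentation."""
    return min(src_candidates, key=lambda cand: subword_count_gap(cand, tgt_tokens))

# Toy usage: two candidate segmentations of the same source sentence.
candidates = [
    ["bi", "lingual", "sub", "word", "segment", "ation"],
    ["bilingual", "subword", "segmentation"],
]
target = ["バイリンガル", "サブワード", "分割"]
best = pick_balanced_segmentation(candidates, target)  # -> the 3-token candidate
```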

Supervised Visual Attention for Multimodal Neural Machine Translation

no code implementations • COLING 2020 • Tetsuro Nishihara, Akihiro Tamura, Takashi Ninomiya, Yutaro Omote, Hideki Nakayama

This paper proposes a supervised visual attention mechanism for multimodal neural machine translation (MNMT), trained with constraints based on manual alignments between words in a sentence and their corresponding regions of an image.

Machine Translation, Sentence, +1
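
One common way to supervise attention with manual alignments (not necessarily the exact loss used in this paper) is to penalize the divergence between the model’s word-to-region attention and a gold alignment distribution, as sketched below in PyTorch; the tensor shapes and names are assumptions.

```python
import torch
import torch.nn.functional as F

def attention_supervision_loss(attn_weights, gold_alignments, eps=1e-8):
    """Cross-entropy between predicted and gold word-to-region attention.

    attn_weights:    (num_words, num_regions), rows summing to 1 (model output).
    gold_alignments: (num_words, num_regions) binary matrix from manual
                     word-region alignments; words with no alignment are skipped.
    """
    mask = gold_alignments.sum(dim=1) > 0            # words that have a gold region
    gold = gold_alignments[mask]
    gold = gold / gold.sum(dim=1, keepdim=True)      # normalize to a distribution
    pred = attn_weights[mask].clamp_min(eps)
    return -(gold * pred.log()).sum(dim=1).mean()

# Toy usage: 4 words attending over 6 image regions.
attn = F.softmax(torch.randn(4, 6), dim=1)
gold = torch.zeros(4, 6)
gold[0, 2] = 1.0   # word 0 aligned to region 2
gold[3, 5] = 1.0   # word 3 aligned to region 5
loss = attention_supervision_loss(attn, gold)
```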

A Visually-Grounded Parallel Corpus with Phrase-to-Region Linking

no code implementations • LREC 2020 • Hideki Nakayama, Akihiro Tamura, Takashi Ninomiya

To verify our dataset, we performed phrase localization experiments in both languages and investigated the effectiveness of our Japanese annotations as well as multilingual learning realized by our dataset.

Image Captioning, Multimodal Machine Translation, +1

Multi-Task Learning for Chemical Named Entity Recognition with Chemical Compound Paraphrasing

no code implementations • IJCNLP 2019 • Taiki Watanabe, Akihiro Tamura, Takashi Ninomiya, Takuya Makino, Tomoya Iwakura

We propose a method to improve named entity recognition (NER) for chemical compounds using multi-task learning by jointly training a chemical NER model and a chemical compound paraphrase model.

Multi-Task Learning, named-entity-recognition, +2
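
The joint training described in the abstract can be illustrated generically (a shared encoder with two task-specific heads, trained on alternating batches); everything below is an assumed toy setup rather than the paper’s actual model.

```python
import torch
import torch.nn as nn

class SharedEncoderMTL(nn.Module):
    """Shared BiLSTM encoder with separate heads for chemical NER and
    chemical-compound paraphrasing (toy dimensions, simplified heads)."""

    def __init__(self, vocab_size=1000, hidden=128, num_ner_tags=5):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        self.ner_head = nn.Linear(2 * hidden, num_ner_tags)        # token-level NER tags
        self.paraphrase_head = nn.Linear(2 * hidden, vocab_size)   # token-level prediction (simplified)

    def forward(self, token_ids, task):
        states, _ = self.encoder(self.emb(token_ids))
        head = self.ner_head if task == "ner" else self.paraphrase_head
        return head(states)

# Toy alternating-batch training step for the two tasks sharing one encoder.
model = SharedEncoderMTL()
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters())
for task, target_dim in [("ner", 5), ("paraphrase", 1000)]:
    tokens = torch.randint(0, 1000, (4, 12))
    targets = torch.randint(0, target_dim, (4, 12))
    logits = model(tokens, task)
    loss = loss_fn(logits.view(-1, logits.size(-1)), targets.view(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```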

Dependency-Based Self-Attention for Transformer NMT

no code implementations • RANLP 2019 • Hiroyuki Deguchi, Akihiro Tamura, Takashi Ninomiya

In this paper, we propose a new Transformer neural machine translation (NMT) model that incorporates dependency relations into self-attention on both source and target sides, called dependency-based self-attention.

Machine Translation, NMT, +2
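
The abstract describes incorporating dependency relations into self-attention; a generic way to illustrate this (an assumption, not the paper’s exact mechanism) is to add a bias to the attention scores of head-dependent token pairs from a dependency parse.

```python
import torch
import torch.nn.functional as F

def dependency_biased_attention(queries, keys, values, heads, bias=1.0):
    """Scaled dot-product self-attention with an extra score bias on
    head-dependent pairs taken from a dependency parse.

    queries/keys/values: (seq_len, dim) tensors for one sentence.
    heads:               list where heads[i] is the index of token i's head
                         (-1 for the root).
    """
    seq_len, dim = queries.shape
    scores = queries @ keys.t() / dim ** 0.5
    dep_mask = torch.zeros(seq_len, seq_len)
    for dependent, head in enumerate(heads):
        if head >= 0:
            dep_mask[dependent, head] = 1.0   # dependent attends more to its head
            dep_mask[head, dependent] = 1.0   # and vice versa
    scores = scores + bias * dep_mask
    return F.softmax(scores, dim=-1) @ values

# Toy usage: 5 tokens with 8-dim representations; heads come from a parser.
x = torch.randn(5, 8)
out = dependency_biased_attention(x, x, x, heads=[1, -1, 1, 4, 1])
```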

Dependency-Based Relative Positional Encoding for Transformer NMT

no code implementations • RANLP 2019 • Yutaro Omote, Akihiro Tamura, Takashi Ninomiya

This paper proposes a new Transformer neural machine translation model that incorporates syntactic distances between two source words into the relative position representations of the self-attention mechanism.

Machine Translation, NMT, +2
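
A simple way to obtain a syntactic distance between two source words (assumed here to mean path length in the dependency tree, which may differ from the paper’s exact definition) is a breadth-first search over the undirected tree, as sketched below; such distances could then be bucketed into relative-position representations for self-attention.

```python
from collections import deque

def syntactic_distances(heads):
    """Pairwise path lengths between tokens in a dependency tree.

    heads[i] is the index of token i's head (-1 for the root).
    Returns a matrix dist where dist[i][j] is the number of dependency
    arcs on the path between tokens i and j.
    """
    n = len(heads)
    adj = [[] for _ in range(n)]
    for child, head in enumerate(heads):
        if head >= 0:
            adj[child].append(head)
            adj[head].append(child)
    dist = [[0] * n for _ in range(n)]
    for start in range(n):
        seen, queue = {start}, deque([(start, 0)])
        while queue:
            node, d = queue.popleft()
            dist[start][node] = d
            for nxt in adj[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, d + 1))
    return dist

# Toy usage: 5 tokens whose dependency heads are given by a parser.
print(syntactic_distances([1, -1, 1, 4, 1]))
```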

Neural Machine Translation Incorporating Named Entity

no code implementations • COLING 2018 • Arata Ugawa, Akihiro Tamura, Takashi Ninomiya, Hiroya Takamura, Manabu Okumura

To alleviate these problems, the encoder of the proposed model encodes the input word on the basis of its NE tag at each time step, which could reduce the ambiguity of the input word.

Machine Translation, NMT, +3
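
A straightforward way to condition the encoder input on NE tags (an illustrative assumption, not necessarily the paper’s architecture) is to concatenate each word embedding with an embedding of the word’s NE tag at every time step, as sketched below.

```python
import torch
import torch.nn as nn

class NETaggedEmbedding(nn.Module):
    """Encoder input layer that combines word and NE-tag embeddings."""

    def __init__(self, vocab_size, tag_vocab_size, word_dim=256, tag_dim=32):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.tag_emb = nn.Embedding(tag_vocab_size, tag_dim)

    def forward(self, word_ids, tag_ids):
        """word_ids, tag_ids: (batch, seq_len) index tensors."""
        # Each input word is represented together with its NE tag, which can
        # help disambiguate, e.g., person names from place names.
        return torch.cat([self.word_emb(word_ids), self.tag_emb(tag_ids)], dim=-1)

# Toy usage: a batch of 2 sentences, 7 tokens each.
layer = NETaggedEmbedding(vocab_size=1000, tag_vocab_size=9)
words = torch.randint(0, 1000, (2, 7))
tags = torch.randint(0, 9, (2, 7))
x = layer(words, tags)   # (2, 7, 288), fed to the NMT encoder
```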

Forest-Based Neural Machine Translation

no code implementations • ACL 2018 • Chunpeng Ma, Akihiro Tamura, Masao Utiyama, Tiejun Zhao, Eiichiro Sumita

Tree-based neural machine translation (NMT) approaches, although they have achieved impressive performance, suffer from a major drawback: they use only the 1-best parse tree to direct the translation, which potentially introduces translation mistakes due to parsing errors.

Machine Translation, NMT, +1

CKY-based Convolutional Attention for Neural Machine Translation

no code implementations • IJCNLP 2017 • Taiki Watanabe, Akihiro Tamura, Takashi Ninomiya

This paper proposes a new attention mechanism for neural machine translation (NMT) based on convolutional neural networks (CNNs), which is inspired by the CKY algorithm.

Machine Translation, NMT, +2
