Search Results for author: Takashi Ninomiya

Found 21 papers, 3 papers with code

Adversarial Training on Disentangling Meaning and Language Representations for Unsupervised Quality Estimation

no code implementations COLING 2022 Yuto Kuroda, Tomoyuki Kajiwara, Yuki Arase, Takashi Ninomiya

We propose a method to distill language-agnostic meaning embeddings from multilingual sentence encoders for unsupervised quality estimation of machine translation.

Tasks: Machine Translation, Sentence, +1
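
The general idea can be illustrated with a minimal sketch, assuming a PyTorch setup and sentence embeddings already produced by some off-the-shelf multilingual encoder; `MeaningDistiller`, `GradReverse`, and all dimensions below are illustrative assumptions, not the authors' implementation. A language discriminator is trained adversarially through gradient reversal so the distilled embedding keeps meaning but sheds language identity.

```python
# Illustrative sketch only -- not the authors' code. Assumes sentence embeddings
# from some multilingual encoder are already computed.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass, negated (reversed) gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class MeaningDistiller(nn.Module):
    def __init__(self, enc_dim=768, meaning_dim=512, n_langs=2, lambd=1.0):
        super().__init__()
        self.meaning = nn.Linear(enc_dim, meaning_dim)    # distilled meaning embedding
        self.lang_clf = nn.Linear(meaning_dim, n_langs)   # adversarial language classifier
        self.lambd = lambd

    def forward(self, sent_emb):
        m = torch.tanh(self.meaning(sent_emb))
        lang_logits = self.lang_clf(GradReverse.apply(m, self.lambd))
        return m, lang_logits

# Training idea: pull parallel sentence pairs together in meaning space while the
# reversed gradient pushes the meaning embedding to be uninformative about language.
model = MeaningDistiller()
src, tgt = torch.randn(8, 768), torch.randn(8, 768)       # dummy encoder outputs
m_src, lang_src = model(src)
m_tgt, lang_tgt = model(tgt)
sim_loss = 1 - nn.functional.cosine_similarity(m_src, m_tgt).mean()
adv_loss = nn.functional.cross_entropy(lang_src, torch.zeros(8, dtype=torch.long)) \
         + nn.functional.cross_entropy(lang_tgt, torch.ones(8, dtype=torch.long))
(sim_loss + adv_loss).backward()
```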

Utterance Position-Aware Dialogue Act Recognition

no code implementations RANLP 2021 Yuki Yano, Akihiro Tamura, Takashi Ninomiya, Hiroaki Obayashi

This study proposes an utterance position-aware approach for a neural network-based dialogue act recognition (DAR) model, which incorporates positional encoding for an utterance's absolute or relative position.

Tasks: Position
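
As a rough illustration of what encoding an utterance's position can look like, here is a toy sketch using the standard sinusoidal formulation; the paper's actual encoding scheme and dimensions are not given in this snippet, and `positional_encoding` is an assumed helper, not the authors' code.

```python
# Illustrative sketch, not the authors' model: sinusoidal encoding of an
# utterance's absolute position in a dialogue, added to its vector representation.
import numpy as np

def positional_encoding(position, dim=256):
    """Standard sinusoidal encoding (Vaswani et al., 2017) for a single position."""
    i = np.arange(dim // 2)
    angles = position / np.power(10000.0, 2 * i / dim)
    pe = np.empty(dim)
    pe[0::2] = np.sin(angles)
    pe[1::2] = np.cos(angles)
    return pe

dialogue = ["hello", "hi , how can i help ?", "i need a taxi"]
utterance_vecs = [np.random.randn(256) for _ in dialogue]   # dummy utterance encodings
# Absolute position; a relative variant could instead encode (t - len(dialogue) + 1).
position_aware = [v + positional_encoding(t) for t, v in enumerate(utterance_vecs)]
```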

A Benchmark Dataset for Multi-Level Complexity-Controllable Machine Translation

1 code implementation LREC 2022 Kazuki Tani, Ryoya Yuasa, Kazuki Takikawa, Akihiro Tamura, Tomoyuki Kajiwara, Takashi Ninomiya, Tsuneo Kato

We create a benchmark test dataset for Japanese-to-English multi-level complexity-controllable machine translation (MLCC-MT) from the Newsela corpus by introducing automatic filtering of data with inappropriate sentence-level complexity, a manual check of parallel target-language sentences at different complexity levels, and manual translation.

Tasks: Machine Translation, NMT, +2
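
In spirit, the automatic complexity filter mentioned above could look like the toy sketch below; the actual scoring function and thresholds used for the dataset are not specified in this snippet, so `complexity_score` and `keep_pair` are purely hypothetical stand-ins.

```python
# Hypothetical sketch of a sentence-level complexity filter; not the dataset's real criterion.
def complexity_score(sentence: str) -> float:
    """Crude proxy for sentence-level complexity: sentence length times average word length."""
    words = sentence.split()
    return len(words) * (sum(len(w) for w in words) / max(len(words), 1))

def keep_pair(simple_sent: str, complex_sent: str, margin: float = 5.0) -> bool:
    """Keep a (simpler, more complex) target pair only if the scores respect the intended ordering."""
    return complexity_score(complex_sent) - complexity_score(simple_sent) >= margin

print(keep_pair("The cat sat on the mat.",
                "The domesticated feline positioned itself upon the woven floor covering."))
```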

Unsupervised Translation Quality Estimation Exploiting Synthetic Data and Pre-trained Multilingual Encoder

no code implementations 9 Nov 2023 Yuto Kuroda, Atsushi Fujita, Tomoyuki Kajiwara, Takashi Ninomiya

In this paper, we extensively investigate the usefulness of synthetic TQE data and pre-trained multilingual encoders in unsupervised sentence-level TQE, both of which have been proven effective in the supervised training scenarios.

Tasks: Sentence, Translation
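
A minimal sketch of how synthetic TQE data might be combined with a pre-trained multilingual encoder, assuming Hugging Face Transformers and xlm-roberta-base (both are assumptions; the snippet does not say which encoder or toolkit the authors used):

```python
# Minimal sketch (not the authors' setup): fine-tune a pre-trained multilingual
# encoder as a regressor on synthetic (source, translation, pseudo-score) triples.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("xlm-roberta-base", num_labels=1)

# Synthetic TQE data: machine-translated sentences paired with automatically
# derived quality scores (dummy values here).
sources = ["Das ist ein Test .", "Guten Morgen ."]
translations = ["This is a test .", "Good morning everyone today ."]
pseudo_scores = torch.tensor([[0.9], [0.4]])

batch = tok(sources, translations, padding=True, truncation=True, return_tensors="pt")
out = model(**batch, labels=pseudo_scores)   # MSE regression loss when num_labels=1
out.loss.backward()
```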

Synchronous Syntactic Attention for Transformer Neural Machine Translation

no code implementations ACL 2021 Hiroyuki Deguchi, Akihiro Tamura, Takashi Ninomiya

This paper proposes a novel attention mechanism for Transformer Neural Machine Translation, “Synchronous Syntactic Attention,” inspired by synchronous dependency grammars.

Tasks: Machine Translation, Translation

Hie-BART: Document Summarization with Hierarchical BART

no code implementations NAACL 2021 Kazuki Akiyama, Akihiro Tamura, Takashi Ninomiya

This paper proposes a new abstractive document summarization model, hierarchical BART (Hie-BART), which captures hierarchical structures of a document (i.e., sentence-word structures) in the BART model.

Tasks: Document Summarization, Machine Translation, +2

Neural Text Generation with Artificial Negative Examples

no code implementations 28 Dec 2020 Keisuke Shirai, Kazuma Hashimoto, Akiko Eriguchi, Takashi Ninomiya, Shinsuke Mori

In this paper, we propose to suppress an arbitrary type of errors by training the text generation model in a reinforcement learning framework, where we use a trainable reward function that is capable of discriminating between references and sentences containing the targeted type of errors.

Tasks: Image Captioning, Machine Translation, +2
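
One way to realize the described setup is a REINFORCE-style update in which a trainable reward model, trained to separate references from sentences containing the targeted error type, scores sampled outputs. The sketch below is conceptual only; every component name and shape is an assumption rather than the paper's architecture.

```python
# Conceptual sketch only: a policy-gradient update driven by a trainable reward model
# (trained elsewhere to separate references from sentences with a targeted error type).
import torch
import torch.nn as nn

vocab_size, hidden = 1000, 128
generator_logits = torch.randn(1, 5, vocab_size, requires_grad=True)  # stand-in for decoder outputs
reward_model = nn.Sequential(nn.Linear(vocab_size, hidden), nn.ReLU(), nn.Linear(hidden, 1))

dist = torch.distributions.Categorical(logits=generator_logits)
sampled = dist.sample()                                   # (1, 5) sampled token ids
log_prob = dist.log_prob(sampled).sum()                   # log-probability of the sample

# Score the sample: a bag-of-words stand-in for "does this look like a reference?"
bow = torch.nn.functional.one_hot(sampled, vocab_size).float().mean(dim=1)
reward = torch.sigmoid(reward_model(bow)).squeeze()

# Policy-gradient loss: raise the probability of sequences the reward model favors.
loss = -(reward.detach() * log_prob)
loss.backward()
```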

Supervised Visual Attention for Multimodal Neural Machine Translation

no code implementations COLING 2020 Tetsuro Nishihara, Akihiro Tamura, Takashi Ninomiya, Yutaro Omote, Hideki Nakayama

This paper proposes a supervised visual attention mechanism for multimodal neural machine translation (MNMT), trained with constraints based on manual alignments between words in a sentence and their corresponding regions of an image.

Tasks: Machine Translation, Sentence, +1
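
A hedged sketch of the core idea is to supervise the word-to-region attention weights with the manual alignments through an extra loss term; the loss form, masking, and shapes below are assumptions, not the authors' exact formulation.

```python
# Illustrative sketch (not the authors' model): an attention-supervision term that
# pushes word-to-region attention toward manual word/region alignments.
import torch
import torch.nn.functional as F

n_words, n_regions, dim = 6, 4, 64
word_states = torch.randn(n_words, dim, requires_grad=True)   # dummy decoder/encoder states
region_feats = torch.randn(n_regions, dim)                    # dummy image-region features

attn_logits = word_states @ region_feats.t()                  # (n_words, n_regions)
log_attn = F.log_softmax(attn_logits, dim=-1)

# Manual alignment: for each word, the index of its aligned image region (-1 = unaligned).
gold_region = torch.tensor([2, 2, -1, 0, 1, -1])
mask = gold_region >= 0
attn_supervision = F.nll_loss(log_attn[mask], gold_region[mask])  # added to the usual NMT loss
attn_supervision.backward()
```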

Bilingual Subword Segmentation for Neural Machine Translation

no code implementations COLING 2020 Hiroyuki Deguchi, Masao Utiyama, Akihiro Tamura, Takashi Ninomiya, Eiichiro Sumita

This paper proposes a new subword segmentation method for neural machine translation, “Bilingual Subword Segmentation,” which tokenizes sentences to minimize the difference between the number of subword units in a sentence and that of its translation.

Tasks: Machine Translation, Segmentation, +2
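
A toy sketch of the stated objective (matching the source-side subword count to the target side) follows; the real method presumably works over proper subword candidate sets, so the naive `candidate_segmentations` helper here is purely an assumption.

```python
# Toy sketch of the count-matching objective only; not the paper's segmentation algorithm.
def candidate_segmentations(word):
    """Hypothetical helper: enumerate a few ways to split one word into subwords."""
    return [[word]] + [[word[:i], word[i:]] for i in range(1, len(word))]

def segment_to_match(src_words, n_target_subwords):
    """Greedily pick per-word splits so the source subword count tracks the target's."""
    segmented, total = [], 0
    remaining = len(src_words)
    for w in src_words:
        budget = (n_target_subwords - total) / remaining      # subwords still "owed" per word
        best = min(candidate_segmentations(w), key=lambda seg: abs(len(seg) - budget))
        segmented.extend(best)
        total += len(best)
        remaining -= 1
    return segmented

print(segment_to_match(["unbelievable", "story"], n_target_subwords=4))
```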

A Visually-Grounded Parallel Corpus with Phrase-to-Region Linking

no code implementations LREC 2020 Hideki Nakayama, Akihiro Tamura, Takashi Ninomiya

To verify our dataset, we performed phrase localization experiments in both languages and investigated the effectiveness of our Japanese annotations as well as multilingual learning realized by our dataset.

Tasks: Image Captioning, Multimodal Machine Translation, +1

Multi-Task Learning for Chemical Named Entity Recognition with Chemical Compound Paraphrasing

no code implementations IJCNLP 2019 Taiki Watanabe, Akihiro Tamura, Takashi Ninomiya, Takuya Makino, Tomoya Iwakura

We propose a method to improve named entity recognition (NER) for chemical compounds using multi-task learning by jointly training a chemical NER model and a chemical compound paraphrase model.

Tasks: Multi-Task Learning, named-entity-recognition, +2
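
A rough sketch of a shared-encoder multi-task setup of this kind is given below; the layer sizes, the stand-in paraphrase head, and the joint-training scheme are assumptions, not the paper's model.

```python
# Illustrative multi-task sketch: one shared encoder, separate NER and paraphrase heads.
import torch
import torch.nn as nn

class MultiTaskChemModel(nn.Module):
    def __init__(self, vocab=5000, dim=128, n_ner_tags=5):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)  # shared encoder
        self.ner_head = nn.Linear(2 * dim, n_ner_tags)   # BIO tagging over tokens
        self.para_head = nn.Linear(2 * dim, vocab)       # stand-in for a paraphrase decoder

    def forward(self, token_ids):
        h, _ = self.encoder(self.embed(token_ids))
        return self.ner_head(h), self.para_head(h)

model = MultiTaskChemModel()
tokens = torch.randint(0, 5000, (2, 10))
ner_logits, para_logits = model(tokens)
# Joint training: alternate (or sum) the chemical NER loss and the paraphrase loss
# so the shared encoder benefits from both supervision signals.
```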

Dependency-Based Relative Positional Encoding for Transformer NMT

no code implementations RANLP 2019 Yutaro Omote, Akihiro Tamura, Takashi Ninomiya

This paper proposes a new Transformer neural machine translation model that incorporates syntactic distances between two source words into the relative position representations of the self-attention mechanism.

Tasks: Machine Translation, NMT, +2
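
One ingredient this suggests is the pairwise syntactic distance between source words in the dependency tree; the sketch below computes and clips such distances, while how they actually enter the relative-position representations of self-attention is left as an assumption.

```python
# Sketch of syntactic (tree) distances between source words; not the paper's full model.
from collections import deque

def tree_distances(heads):
    """heads[i] = index of the dependency head of word i (-1 for the root)."""
    n = len(heads)
    adj = [[] for _ in range(n)]
    for i, h in enumerate(heads):
        if h >= 0:
            adj[i].append(h)
            adj[h].append(i)
    dist = [[0] * n for _ in range(n)]
    for s in range(n):                      # BFS from every word over the undirected tree
        seen, q = {s}, deque([(s, 0)])
        while q:
            u, d = q.popleft()
            dist[s][u] = d
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    q.append((v, d + 1))
    return dist

# "the cat sat on the mat" with "sat" (index 2) as root
heads = [1, 2, -1, 2, 5, 3]
rel = tree_distances(heads)
clipped = [[min(d, 4) for d in row] for row in rel]   # clipping, as relative-position schemes usually do
print(clipped[0])   # syntactic distances from "the" to every other word
```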

Dependency-Based Self-Attention for Transformer NMT

no code implementations RANLP 2019 Hiroyuki Deguchi, Akihiro Tamura, Takashi Ninomiya

In this paper, we propose a new Transformer neural machine translation (NMT) model that incorporates dependency relations into self-attention on both source and target sides, dependency-based self-attention.

Tasks: Machine Translation, NMT, +2

Neural Machine Translation Incorporating Named Entity

no code implementations COLING 2018 Arata Ugawa, Akihiro Tamura, Takashi Ninomiya, Hiroya Takamura, Manabu Okumura

To alleviate these problems, the encoder of the proposed model encodes the input word on the basis of its NE tag at each time step, which could reduce the ambiguity of the input word.

Tasks: Machine Translation, NMT, +3
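
A minimal sketch of encoding each input word together with its NE tag is shown below, here by concatenating a tag embedding with the word embedding before a bidirectional GRU; this particular composition is an assumption, not necessarily the paper's exact mechanism.

```python
# Illustrative NE-aware encoder sketch; dimensions and the concatenation scheme are assumptions.
import torch
import torch.nn as nn

class NEAwareEncoder(nn.Module):
    def __init__(self, vocab=5000, n_ne_tags=9, word_dim=256, tag_dim=32, hidden=256):
        super().__init__()
        self.word_emb = nn.Embedding(vocab, word_dim)
        self.tag_emb = nn.Embedding(n_ne_tags, tag_dim)    # e.g., BIO tags for PER/LOC/ORG/MISC
        self.rnn = nn.GRU(word_dim + tag_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, word_ids, ne_tag_ids):
        x = torch.cat([self.word_emb(word_ids), self.tag_emb(ne_tag_ids)], dim=-1)
        states, _ = self.rnn(x)              # NE-informed source representations for the decoder
        return states

enc = NEAwareEncoder()
words = torch.randint(0, 5000, (1, 7))
tags = torch.randint(0, 9, (1, 7))
print(enc(words, tags).shape)                # torch.Size([1, 7, 512])
```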

CKY-based Convolutional Attention for Neural Machine Translation

no code implementations IJCNLP 2017 Taiki Watanabe, Akihiro Tamura, Takashi Ninomiya

This paper proposes a new attention mechanism for neural machine translation (NMT) based on convolutional neural networks (CNNs), which is inspired by the CKY algorithm.

Tasks: Machine Translation, NMT, +2
