Search Results for author: Masaru Kitsuregawa

Found 18 papers, 4 papers with code

Speculative Sampling in Variational Autoencoders for Dialogue Response Generation

1 code implementation Findings (EMNLP) 2021 Shoetsu Sato, Naoki Yoshinaga, Masashi Toyoda, Masaru Kitsuregawa

Our method chooses the most probable latent variable from the redundantly sampled candidates, tying the variable to a given response.

Response Generation
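The selection step described in the snippet above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a diagonal-Gaussian latent space and assumes "most probable" means highest density under the posterior, which may differ from the paper's actual criterion. All function names and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latents(mu, logvar, k, rng):
    """Redundantly draw k latent samples from a diagonal Gaussian posterior."""
    std = np.exp(0.5 * logvar)
    eps = rng.standard_normal((k, mu.shape[0]))
    return mu + eps * std

def log_prob_diag_gaussian(z, mu, logvar):
    """Log density of z under N(mu, diag(exp(logvar)))."""
    var = np.exp(logvar)
    return -0.5 * np.sum(logvar + np.log(2 * np.pi) + (z - mu) ** 2 / var, axis=-1)

def select_most_probable(mu, logvar, k=16, rng=rng):
    """Keep the single sample with the highest density (assumed selection rule)."""
    zs = sample_latents(mu, logvar, k, rng)
    scores = log_prob_diag_gaussian(zs, mu, logvar)
    return zs[np.argmax(scores)]

# toy posterior parameters for an 8-dimensional latent
mu = np.zeros(8)
logvar = np.full(8, -1.0)
z = select_most_probable(mu, logvar, k=32)
```

The selected `z` would then condition the decoder that generates the response; sampling redundantly and filtering trades extra sampling cost for a latent that better matches the target response.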

Building Large-Scale Japanese Pronunciation-Annotated Corpora for Reading Heteronymous Logograms

no code implementations LREC 2022 Fumikazu Sato, Naoki Yoshinaga, Masaru Kitsuregawa

In this study, to improve the accuracy of pronunciation prediction, we construct two large-scale Japanese corpora that annotate kanji characters with their pronunciations.

Sentence

Domain Adaptive Multiple Instance Learning for Instance-level Prediction of Pathological Images

1 code implementation 7 Apr 2023 Shusuke Takahama, Yusuke Kurose, Yusuke Mukuta, Hiroyuki Abe, Akihiko Yoshizawa, Tetsuo Ushiku, Masashi Fukayama, Masanobu Kitagawa, Masaru Kitsuregawa, Tatsuya Harada

We conducted experiments on the pathological image dataset we created for this study and showed that the proposed method significantly improves the classification performance compared to existing methods.

Domain Adaptation, Multiple Instance Learning

Vocabulary Adaptation for Domain Adaptation in Neural Machine Translation

1 code implementation Findings of the Association for Computational Linguistics 2020 Shoetsu Sato, Jin Sakuma, Naoki Yoshinaga, Masashi Toyoda, Masaru Kitsuregawa

Prior to fine-tuning, our method replaces the embedding layers of the NMT model by projecting general word embeddings induced from monolingual data in a target domain onto a source-domain embedding space.

Domain Adaptation, Machine Translation, +3
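The projection step mentioned in the snippet above — mapping target-domain embeddings into the source-domain embedding space before fine-tuning — can be sketched with a simple least-squares linear map fitted on the shared vocabulary. This is a hedged toy sketch, not the paper's method: the choice of a least-squares fit and all dimensions here are assumptions for illustration.

```python
import numpy as np

def fit_projection(tgt_emb, src_emb):
    """Least-squares linear map W such that tgt_emb @ W ~= src_emb,
    fitted on words shared between the two embedding spaces."""
    W, *_ = np.linalg.lstsq(tgt_emb, src_emb, rcond=None)
    return W

rng = np.random.default_rng(0)

# toy shared-vocabulary embeddings (100 shared words, 16-dim spaces)
shared_tgt = rng.standard_normal((100, 16))  # target-domain embeddings
W_true = rng.standard_normal((16, 16))       # hidden ground-truth map
shared_src = shared_tgt @ W_true             # their source-space counterparts

W = fit_projection(shared_tgt, shared_src)

# project embeddings of target-domain-only words into the source space,
# where they can replace rows of the NMT model's embedding layer
new_word_emb = rng.standard_normal((5, 16))
projected = new_word_emb @ W
```

The projected vectors live in the source-domain space, so the pre-trained NMT model can accept the new vocabulary's embeddings before fine-tuning begins.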

Robust Backed-off Estimation of Out-of-Vocabulary Embeddings

no code implementations Findings of the Association for Computational Linguistics 2020 Nobukazu Fukuda, Naoki Yoshinaga, Masaru Kitsuregawa

In this study, inspired by the processes by which new words are coined from known words, we propose a robust method for estimating OOV word embeddings that refers to the pre-trained embeddings of known words whose surfaces are similar to the target OOV words.

Word Embeddings, Word Similarity
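The backing-off idea in the snippet above can be sketched as averaging the embeddings of surface-similar known words. This is a minimal illustration under assumptions of my own, not the paper's method: character n-gram Jaccard overlap stands in for whatever surface-similarity measure the authors use, and the weighting scheme is hypothetical.

```python
import numpy as np

def char_ngrams(word, n=3):
    """Character n-grams of a word with boundary markers."""
    padded = f"<{word}>"
    return {padded[i:i + n] for i in range(len(padded) - n + 1)}

def surface_similarity(a, b, n=3):
    """Jaccard overlap of character n-grams (a simple stand-in measure)."""
    ga, gb = char_ngrams(a, n), char_ngrams(b, n)
    return len(ga & gb) / len(ga | gb)

def estimate_oov(oov, vocab_embs, top_k=3):
    """Back off to pre-trained embeddings of known words with similar surfaces,
    weighting each neighbor by its surface similarity to the OOV word."""
    neighbors = sorted(vocab_embs, key=lambda w: surface_similarity(oov, w),
                       reverse=True)[:top_k]
    weights = np.array([surface_similarity(oov, w) for w in neighbors])
    vecs = np.stack([vocab_embs[w] for w in neighbors])
    return (weights[:, None] * vecs).sum(axis=0) / weights.sum()

# toy pre-trained embeddings for known words
vocab_embs = {
    "playing": np.array([1.0, 0.0]),
    "played":  np.array([0.9, 0.1]),
    "dog":     np.array([0.0, 1.0]),
}
vec = estimate_oov("plays", vocab_embs)
```

Because "plays" shares character n-grams with "playing" and "played" but not with "dog", the estimate lands near the play-related vectors.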

Vocabulary Adaptation for Distant Domain Adaptation in Neural Machine Translation

no code implementations 30 Apr 2020 Shoetsu Sato, Jin Sakuma, Naoki Yoshinaga, Masashi Toyoda, Masaru Kitsuregawa

Prior to fine-tuning, our method replaces the embedding layers of the NMT model by projecting general word embeddings induced from monolingual data in a target domain onto a source-domain embedding space.

Domain Adaptation, Machine Translation, +3

Learning to Describe Unknown Phrases with Local and Global Contexts

no code implementations NAACL 2019 Shonosuke Ishiwatari, Hiroaki Hayashi, Naoki Yoshinaga, Graham Neubig, Shoetsu Sato, Masashi Toyoda, Masaru Kitsuregawa

When reading a text, it is common to become stuck on unfamiliar words and phrases, such as polysemous words with novel senses, rarely used idioms, internet slang, or emerging entities.

Learning to Describe Phrases with Local and Global Contexts

1 code implementation1 Nov 2018 Shonosuke Ishiwatari, Hiroaki Hayashi, Naoki Yoshinaga, Graham Neubig, Shoetsu Sato, Masashi Toyoda, Masaru Kitsuregawa

When reading a text, it is common to become stuck on unfamiliar words and phrases, such as polysemous words with novel senses, rarely used idioms, internet slang, or emerging entities.

Reading Comprehension
