Search Results for author: Junji Tomita

Found 16 papers, 0 papers with code

Investigating the Effect of Conveying Understanding Results in Chat-Oriented Dialogue Systems

no code implementations • IJCNLP 2017 • Koh Mitsuda, Ryuichiro Higashinaka, Junji Tomita

In this paper, we explored the effect of conveying the understanding results of user utterances in a chat-oriented dialogue system through an experiment with human subjects.
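
As a toy illustration of the behavior being studied, here is a minimal sketch of a chatbot turn that verbalizes its understanding result before replying; the intent classifier and the canned reply are hypothetical stand-ins, not the paper's system.

```python
# A minimal sketch of conveying an understanding result: before replying,
# the system states how it interpreted the user's utterance.
def classify_intent(utterance):
    """Hypothetical understanding component: map an utterance to an intent."""
    if "?" in utterance:
        return "question"
    return "statement"

def respond(utterance):
    intent = classify_intent(utterance)
    # Convey the understanding result first, then the actual reply.
    understanding = f"I take that as a {intent}."
    reply = "Interesting, tell me more."
    return f"{understanding} {reply}"

print(respond("Do you like movies?"))
# -> "I take that as a question. Interesting, tell me more."
```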

Automatically Extracting Variant-Normalization Pairs for Japanese Text Normalization

no code implementations • IJCNLP 2017 • Itsumi Saito, Kyosuke Nishida, Kugatsu Sadamitsu, Kuniko Saito, Junji Tomita

Social media texts, such as tweets from Twitter, contain many types of non-standard tokens, and the number of normalization approaches for handling such noisy text has been increasing.

Machine Translation • Morphological Analysis
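
As a minimal sketch of how extracted variant-normalization pairs could be applied, the dictionary lookup below replaces known non-standard tokens with their normalized forms; the example pairs and the token-level replacement strategy are assumptions for illustration, not the paper's extraction method.

```python
# A minimal sketch of dictionary-based lexical normalization, assuming the
# variant-normalization pairs have already been extracted from noisy text.
# These example pairs are hypothetical stand-ins, not the paper's data.
VARIANT_PAIRS = {
    "すごーい": "すごい",
    "ぁりがと": "ありがとう",
    "おはよー": "おはよう",
}

def normalize(tokens):
    """Replace each known variant token with its normalized form."""
    return [VARIANT_PAIRS.get(tok, tok) for tok in tokens]

if __name__ == "__main__":
    noisy = ["おはよー", "今日", "は", "すごーい", "天気"]
    print(normalize(noisy))  # ['おはよう', '今日', 'は', 'すごい', '天気']
```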

Retrieve-and-Read: Multi-task Learning of Information Retrieval and Reading Comprehension

no code implementations • 31 Aug 2018 • Kyosuke Nishida, Itsumi Saito, Atsushi Otsuka, Hisako Asano, Junji Tomita

Previous machine reading at scale (MRS) studies, in which the IR component was trained without considering answer spans, struggled to accurately find a small number of relevant passages from a large set of passages.

Information Retrieval • Multi-Task Learning • +2
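
A minimal PyTorch sketch of the multi-task idea: a shared encoder feeds both a passage-relevance head (IR) and an answer-span head (RC), trained with a joint loss. The architecture, shapes, and loss weight alpha below are illustrative assumptions, not the paper's exact model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RetrieveAndRead(nn.Module):
    def __init__(self, vocab_size=1000, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.relevance = nn.Linear(hidden, 1)   # IR head: passage relevance
        self.span = nn.Linear(hidden, 2)        # RC head: start/end logits

    def forward(self, passage_ids):
        states, _ = self.encoder(self.embed(passage_ids))
        rel_logit = self.relevance(states.mean(dim=1)).squeeze(-1)
        start_logits, end_logits = self.span(states).unbind(dim=-1)
        return rel_logit, start_logits, end_logits

model = RetrieveAndRead()
ids = torch.randint(0, 1000, (4, 30))           # batch of 4 passages
rel, start, end = model(ids)
rel_gold = torch.tensor([1., 0., 0., 1.])       # which passages are relevant
starts, ends = torch.tensor([3, 0, 0, 7]), torch.tensor([5, 0, 0, 9])
alpha = 0.5                                     # IR/RC balance (assumed)
# Joint loss over both tasks; in practice the span loss would be masked
# for non-relevant passages.
loss = (alpha * F.binary_cross_entropy_with_logits(rel, rel_gold)
        + F.cross_entropy(start, starts)
        + F.cross_entropy(end, ends))
loss.backward()
```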

Commonsense Knowledge Base Completion and Generation

no code implementations • CoNLL 2018 • Itsumi Saito, Kyosuke Nishida, Hisako Asano, Junji Tomita

To improve the accuracy of CKB completion and expand the size of CKBs, we formulate a new commonsense knowledge base generation task (CKB generation) and propose a joint learning method that incorporates both CKB completion and CKB generation.

Knowledge Base Completion • Question Answering • +1
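
A minimal PyTorch sketch of joint learning over the two tasks: a shared entity/relation encoder feeds both a triple-scoring head (CKB completion) and a token-generation head (CKB generation). The bilinear scorer and all sizes are assumptions for illustration, not the paper's method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointCKB(nn.Module):
    def __init__(self, n_ent=500, n_rel=20, n_vocab=300, dim=64):
        super().__init__()
        self.ent = nn.Embedding(n_ent, dim)
        self.rel = nn.Embedding(n_rel, dim)
        self.score = nn.Bilinear(dim, dim, 1)    # completion: is (h, r, t) valid?
        self.gen = nn.Linear(2 * dim, n_vocab)   # generation: predict a tail token

    def forward(self, h, r, t):
        hr = self.ent(h) * self.rel(r)           # head-relation interaction
        valid_logit = self.score(hr, self.ent(t)).squeeze(-1)
        token_logits = self.gen(torch.cat([self.ent(h), self.rel(r)], dim=-1))
        return valid_logit, token_logits

model = JointCKB()
h, r, t = torch.tensor([3]), torch.tensor([1]), torch.tensor([42])
valid, tokens = model(h, r, t)
# Joint loss: completion (binary) + generation (token prediction).
loss = (F.binary_cross_entropy_with_logits(valid, torch.tensor([1.]))
        + F.cross_entropy(tokens, torch.tensor([17])))  # assumed gold token
loss.backward()
```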

Multi-style Generative Reading Comprehension

no code implementations • ACL 2019 • Kyosuke Nishida, Itsumi Saito, Kosuke Nishida, Kazutoshi Shinoda, Atsushi Otsuka, Hisako Asano, Junji Tomita

Second, whereas previous studies built a specific model for each answer style because of the difficulty of acquiring one general model, our approach learns multi-style answers within a single model to improve the NLG capability for all styles involved.

Abstractive Text Summarization • Question Answering • +2
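
A minimal PyTorch sketch of one way to learn multi-style answers within a single model: a learned style embedding is added to every decoder input, so the same parameters can produce answers in different styles. The two styles and all dimensions are assumed for illustration.

```python
import torch
import torch.nn as nn

class StyleConditionedDecoder(nn.Module):
    def __init__(self, vocab=1000, n_styles=2, dim=64):
        super().__init__()
        self.tok = nn.Embedding(vocab, dim)
        self.style = nn.Embedding(n_styles, dim)  # e.g. 0=concise, 1=well-formed
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)

    def forward(self, token_ids, style_id):
        # Add the style embedding to every input position.
        x = self.tok(token_ids) + self.style(style_id).unsqueeze(1)
        h, _ = self.rnn(x)
        return self.out(h)          # per-position vocabulary logits

dec = StyleConditionedDecoder()
ids = torch.randint(0, 1000, (2, 8))
concise = dec(ids, torch.tensor([0, 0]))   # same model, two answer styles
verbose = dec(ids, torch.tensor([1, 1]))
```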

Length-controllable Abstractive Summarization by Guiding with Summary Prototype

no code implementations • 21 Jan 2020 • Itsumi Saito, Kyosuke Nishida, Kosuke Nishida, Atsushi Otsuka, Hisako Asano, Junji Tomita, Hiroyuki Shindo, Yuji Matsumoto

Unlike previous models, our length-controllable abstractive summarization model incorporates a word-level extractive module in the encoder-decoder architecture instead of length embeddings.

Abstractive Text Summarization
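
A minimal PyTorch sketch of the prototype idea: a word-level extractive module scores source tokens and keeps the top-k (with k set to the desired summary length) as a prototype that would then guide the decoder. The toy scorer below stands in for the paper's learned extractor.

```python
import torch
import torch.nn as nn

class PrototypeExtractor(nn.Module):
    def __init__(self, vocab=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.saliency = nn.Linear(dim, 1)

    def forward(self, token_ids, target_len):
        # Score every source token, keep the top target_len of them,
        # and return the kept tokens in their original source order.
        scores = self.saliency(self.embed(token_ids)).squeeze(-1)  # (batch, seq)
        topk = scores.topk(target_len, dim=1).indices.sort(dim=1).values
        return torch.gather(token_ids, 1, topk)

extract = PrototypeExtractor()
src = torch.randint(0, 1000, (1, 50))
prototype = extract(src, target_len=12)   # 12-token prototype guides decoding
print(prototype.shape)                    # torch.Size([1, 12])
```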

Abstractive Summarization with Combination of Pre-trained Sequence-to-Sequence and Saliency Models

no code implementations • 29 Mar 2020 • Itsumi Saito, Kyosuke Nishida, Kosuke Nishida, Junji Tomita

Experimental results showed that most of the combination models outperformed a simple fine-tuned seq-to-seq model on both the CNN/DM and XSum datasets, even when the seq-to-seq model was pre-trained on large-scale corpora.

Abstractive Text Summarization • Text Generation
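
A minimal PyTorch sketch of one possible combination: a saliency model scores each source token, and the seq-to-seq encoder states are reweighted by those scores before decoding. The paper evaluates several combination variants; this weighting scheme is an assumed illustration, not the reported best model.

```python
import torch
import torch.nn as nn

class SaliencyGuidedEncoder(nn.Module):
    def __init__(self, vocab=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.saliency = nn.Linear(dim, 1)   # stand-in for a saliency model

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))
        weights = torch.sigmoid(self.saliency(states))  # per-token saliency
        # Salient tokens dominate whatever attention the decoder computes.
        return states * weights

enc = SaliencyGuidedEncoder()
memory = enc(torch.randint(0, 1000, (2, 40)))
print(memory.shape)   # torch.Size([2, 40, 64]) -> fed to the decoder
```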
