Search Results for author: Itsumi Saito

Found 14 papers, 1 paper with code

SlideVQA: A Dataset for Document Visual Question Answering on Multiple Images

1 code implementation • 12 Jan 2023 • Ryota Tanaka, Kyosuke Nishida, Kosuke Nishida, Taku Hasegawa, Itsumi Saito, Kuniko Saito

Visual question answering on document images that contain textual, visual, and layout information, called document VQA, has received much attention recently.

Evidence Selection • Question Answering • +1

Towards Interpretable and Reliable Reading Comprehension: A Pipeline Model with Unanswerability Prediction

no code implementations • 17 Nov 2021 • Kosuke Nishida, Kyosuke Nishida, Itsumi Saito, Sen Yoshida

In this study, we define an interpretable reading comprehension (IRC) model as a pipeline model with the capability of predicting unanswerable queries.

Reading Comprehension
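
To make the pipeline idea concrete, here is a minimal sketch in plain Python (not the authors' model): an evidence-selection step precedes answering, so unanswerable queries are rejected at an interpretable intermediate stage. The function names and the word-overlap heuristic are illustrative assumptions standing in for learned components.

```python
from typing import Optional

def extract_evidence(question: str, passage: str) -> Optional[str]:
    """Toy evidence selector: return the first sentence that shares a
    word with the question, or None when nothing supports the query."""
    q_words = {w.lower().strip("?.,") for w in question.split()}
    for sentence in passage.split("."):
        if q_words & {w.lower().strip("?.,") for w in sentence.split()}:
            return sentence.strip()
    return None  # unanswerability prediction: no supporting evidence

def pipeline_answer(question: str, passage: str) -> str:
    evidence = extract_evidence(question, passage)
    if evidence is None:
        return "<unanswerable>"  # pipeline stops before answer extraction
    return evidence              # a real reader would extract a span here

doc = "Shakespeare wrote Hamlet. It premiered around 1600."
print(pipeline_answer("Who wrote Hamlet?", doc))           # supported query
print(pipeline_answer("Who discovered penicillin?", doc))  # -> <unanswerable>
```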

Abstractive Summarization with Combination of Pre-trained Sequence-to-Sequence and Saliency Models

no code implementations • 29 Mar 2020 • Itsumi Saito, Kyosuke Nishida, Kosuke Nishida, Junji Tomita

Experimental results showed that most of the combination models outperformed a simple fine-tuned seq-to-seq model on both the CNN/DM and XSum datasets, even when the seq-to-seq model was pre-trained on large-scale corpora.

Abstractive Text Summarization • Text Generation
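
As a rough illustration of one way to combine a saliency model with a seq-to-seq summarizer (an extract-then-abstract strategy; this sketch is an assumption for illustration, not the paper's exact architecture, and the frequency-based scorer stands in for a learned saliency model):

```python
from collections import Counter

def combine_and_summarize(document: str, k: int = 2) -> str:
    """Hedged extract-then-abstract sketch: score sentences with a toy
    frequency-based saliency model, keep the top k, and hand only those
    to the abstractive model (stubbed out here as the identity)."""
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    freq = Counter(w.lower() for s in sentences for w in s.split())

    def saliency(sentence: str) -> int:
        # Toy saliency: sum of corpus-frequency weights of the words.
        return sum(freq[w.lower()] for w in sentence.split())

    top = sorted(sentences, key=saliency, reverse=True)[:k]
    return ". ".join(top) + "."  # a pre-trained seq-to-seq model would rewrite this
```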

Length-controllable Abstractive Summarization by Guiding with Summary Prototype

no code implementations • 21 Jan 2020 • Itsumi Saito, Kyosuke Nishida, Kosuke Nishida, Atsushi Otsuka, Hisako Asano, Junji Tomita, Hiroyuki Shindo, Yuji Matsumoto

Unlike previous models, our length-controllable abstractive summarization model incorporates a word-level extractive module in the encoder-decoder model instead of length embeddings.

Abstractive Text Summarization
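
A minimal sketch of the word-level extractive idea (illustrative only; the toy frequency scoring below stands in for the paper's learned extractive module): extract at most `desired_len` important words, in source order, as a prototype that fixes the output length before decoding.

```python
from collections import Counter

def summary_prototype(source: str, desired_len: int) -> list:
    """Toy word-level extractive module: keep the desired_len most
    frequent words, preserved in source order, as a length-controlling
    prototype that guides the encoder-decoder."""
    words = source.split()
    freq = Counter(w.lower() for w in words)
    keep = {w for w, _ in freq.most_common(desired_len)}
    prototype, seen = [], set()
    for w in words:
        if w.lower() in keep and w.lower() not in seen:
            prototype.append(w)
            seen.add(w.lower())
    return prototype  # at most desired_len words guide generation
```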

Multi-style Generative Reading Comprehension

no code implementations • ACL 2019 • Kyosuke Nishida, Itsumi Saito, Kosuke Nishida, Kazutoshi Shinoda, Atsushi Otsuka, Hisako Asano, Junji Tomita

Second, whereas previous studies built a specific model for each answer style because of the difficulty of acquiring one general model, our approach learns multi-style answers within a single model to improve the NLG capability for all styles involved.

Abstractive Text Summarization • Question Answering • +2
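
A sketch of the single-model multi-style idea (the style names and token format here are assumptions for illustration): condition one generator on a style control token rather than training a separate model per answer style.

```python
STYLES = ("extractive", "well_formed")  # illustrative style names

def decoder_input(question: str, style: str) -> str:
    """One model, many styles: prefix the requested answer style as a
    control token so a single decoder learns every style jointly."""
    if style not in STYLES:
        raise ValueError(f"unknown style: {style}")
    return f"<style={style}> {question}"

print(decoder_input("who wrote hamlet", "well_formed"))
# -> <style=well_formed> who wrote hamlet
```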

Commonsense Knowledge Base Completion and Generation

no code implementations • CoNLL 2018 • Itsumi Saito, Kyosuke Nishida, Hisako Asano, Junji Tomita

To improve the accuracy of CKB completion and expand the size of CKBs, we formulate a new commonsense knowledge base generation task (CKB generation) and propose a joint learning method that incorporates both CKB completion and CKB generation.

Knowledge Base Completion • Question Answering • +1
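
At its simplest, the joint-learning idea amounts to optimizing both objectives together over shared parameters; a hedged sketch (the weighted-sum form is an assumption, not necessarily the paper's exact formulation):

```python
def joint_loss(completion_loss: float, generation_loss: float,
               weight: float = 1.0) -> float:
    """Train one model on both tasks: scoring existing triples (CKB
    completion) and producing new triples (CKB generation), so that
    each objective regularizes the other through shared parameters."""
    return completion_loss + weight * generation_loss

print(joint_loss(0.8, 1.2, weight=0.5))  # -> 1.4
```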

Retrieve-and-Read: Multi-task Learning of Information Retrieval and Reading Comprehension

no code implementations • 31 Aug 2018 • Kyosuke Nishida, Itsumi Saito, Atsushi Otsuka, Hisako Asano, Junji Tomita

Previous MRS studies, in which the IR component was trained without considering answer spans, struggled to accurately find a small number of relevant passages from a large set of passages.

Information Retrieval • Multi-Task Learning • +2
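
A structural sketch of retrieve-and-read (the lexical-overlap scorer below is a stand-in for the paper's shared neural encoder): both the retrieval head and the reading head consume the same representation, which is what lets answer-span supervision reach the IR component.

```python
def shared_encode(text: str) -> set:
    """Stand-in for a shared encoder used by both the IR and RC heads."""
    return {w.lower().strip(".,?") for w in text.split()}

def relevance(question: str, passage: str) -> float:
    q, p = shared_encode(question), shared_encode(passage)
    return len(q & p) / max(len(q), 1)  # IR head: rank candidate passages

def retrieve_and_read(question: str, passages: list) -> str:
    best = max(passages, key=lambda p: relevance(question, p))
    return best  # RC head would extract an answer span from `best`

docs = ["Shakespeare wrote Hamlet.", "Paris is the capital of France."]
print(retrieve_and_read("Who wrote Hamlet?", docs))
```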

Automatically Extracting Variant-Normalization Pairs for Japanese Text Normalization

no code implementations • IJCNLP 2017 • Itsumi Saito, Kyosuke Nishida, Kugatsu Sadamitsu, Kuniko Saito, Junji Tomita

Social media texts, such as tweets from Twitter, contain many types of non-standard tokens, and the number of normalization approaches for handling such noisy text has been increasing.

Machine Translation • Morphological Analysis
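
A minimal sketch of how such automatically extracted pairs would be applied at normalization time (the example pairs are hypothetical English stand-ins for the Japanese variants handled in the paper):

```python
# Hypothetical variant -> normal-form pairs; the paper extracts such
# pairs automatically from social-media text rather than by hand.
PAIRS = {"suuuper": "super", "2moro": "tomorrow"}

def normalize(tokens: list) -> list:
    """Lexicon lookup over extracted pairs; unseen tokens pass through."""
    return [PAIRS.get(t.lower(), t) for t in tokens]

print(normalize(["suuuper", "excited", "2moro"]))
# -> ['super', 'excited', 'tomorrow']
```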
