no code implementations • 29 Mar 2020 • Itsumi Saito, Kyosuke Nishida, Kosuke Nishida, Junji Tomita
Experimental results showed that most of the combination models outperformed a simple fine-tuned seq-to-seq model on both the CNN/DM and XSum datasets, even when the seq-to-seq model was pre-trained on large-scale corpora.
no code implementations • 21 Jan 2020 • Itsumi Saito, Kyosuke Nishida, Kosuke Nishida, Atsushi Otsuka, Hisako Asano, Junji Tomita, Hiroyuki Shindo, Yuji Matsumoto
Unlike previous models, our length-controllable abstractive summarization model incorporates a word-level extractive module in the encoder-decoder model instead of length embeddings.
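The idea lends itself to a compact sketch. Below is a minimal, hypothetical PyTorch illustration of a word-level extractive module that gates encoder states to a token budget; the class name, shapes, and gating scheme are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch: a word-level extractive module that keeps only a
# token budget's worth of source tokens to control summary length.
import torch
import torch.nn as nn

class WordLevelExtractor(nn.Module):
    def __init__(self, hidden_size: int):
        super().__init__()
        self.scorer = nn.Linear(hidden_size, 1)  # per-token salience score

    def forward(self, enc_states: torch.Tensor, length_budget: int):
        # enc_states: (batch, src_len, hidden)
        scores = self.scorer(enc_states).squeeze(-1)   # (batch, src_len)
        probs = torch.sigmoid(scores)
        # keep the `length_budget` highest-scoring tokens per example
        topk = probs.topk(length_budget, dim=-1).indices
        mask = torch.zeros_like(probs)
        mask.scatter_(1, topk, 1.0)
        # soft-gate encoder states so the decoder attends mostly to kept tokens
        return enc_states * (mask * probs).unsqueeze(-1)

extractor = WordLevelExtractor(hidden_size=256)
enc = torch.randn(2, 50, 256)               # dummy encoder outputs
gated = extractor(enc, length_budget=20)    # budget steers summary length
```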
no code implementations • LREC 2020 • Kosuke Nishida, Kyosuke Nishida, Itsumi Saito, Hisako Asano, Junji Tomita
The second is the proposed model, which uses a multi-task learning approach that combines language modeling (LM) and reading comprehension (RC).
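As a rough illustration, such a multi-task objective can be sketched as a weighted sum of an LM loss and an RC span loss; the function name, tensor shapes, and mixing weight below are assumptions, not the paper's exact formulation.

```python
# Hedged sketch of a multi-task objective mixing language modeling (LM)
# and reading comprehension (RC) losses over shared parameters.
import torch
import torch.nn.functional as F

def multitask_loss(lm_logits, lm_targets, start_logits, end_logits,
                   start_pos, end_pos, lm_weight: float = 0.5):
    # LM head: next-token cross-entropy over the vocabulary
    # lm_logits: (batch, seq, vocab), lm_targets: (batch, seq)
    lm_loss = F.cross_entropy(lm_logits.flatten(0, 1), lm_targets.flatten())
    # RC head: answer-span start/end cross-entropy
    rc_loss = (F.cross_entropy(start_logits, start_pos)
               + F.cross_entropy(end_logits, end_pos)) / 2
    # `lm_weight` is an illustrative assumption, not the paper's setting
    return rc_loss + lm_weight * lm_loss
```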
no code implementations • WS 2019 • Yasuhito Ohsugi, Itsumi Saito, Kyosuke Nishida, Hisako Asano, Junji Tomita
Conversational machine comprehension (CMC) requires understanding the context of multi-turn dialogue.
no code implementations • ACL 2019 • Kosuke Nishida, Kyosuke Nishida, Masaaki Nagata, Atsushi Otsuka, Itsumi Saito, Hisako Asano, Junji Tomita
It enables QFE to consider the dependencies among the evidence sentences and to cover the important information in the question sentence (see the sketch below).
Ranked #64 on Question Answering on HotpotQA
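A hedged sketch of the sequential extraction idea: evidence sentences are selected one at a time while an RNN state carries the dependency on already-selected sentences. The module name, greedy selection loop, and sizes are illustrative assumptions, not QFE's actual implementation.

```python
# Illustrative sketch: sequential evidence-sentence extraction where an
# RNN state captures dependencies among already-selected sentences.
import torch
import torch.nn as nn

class SequentialExtractor(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.rnn = nn.GRUCell(dim, dim)
        self.score = nn.Bilinear(dim, dim, 1)

    def forward(self, sent_vecs: torch.Tensor, steps: int):
        # sent_vecs: (num_sents, dim) encoded candidate sentences
        state = sent_vecs.new_zeros(sent_vecs.size(-1))
        picked = []
        for _ in range(steps):
            # score each sentence against the current extraction state
            s = self.score(sent_vecs, state.expand_as(sent_vecs)).squeeze(-1)
            idx = int(s.argmax())  # greedy pick; training would use a loss
            picked.append(idx)
            state = self.rnn(sent_vecs[idx].unsqueeze(0),
                             state.unsqueeze(0)).squeeze(0)
        return picked

ext = SequentialExtractor(dim=128)
sents = torch.randn(10, 128)       # dummy sentence encodings
print(ext(sents, steps=2))          # indices of selected evidence sentences
```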
no code implementations • ACL 2019 • Kyosuke Nishida, Itsumi Saito, Kosuke Nishida, Kazutoshi Shinoda, Atsushi Otsuka, Hisako Asano, Junji Tomita
Second, whereas previous studies built a specific model for each answer style because of the difficulty of acquiring one general model, our approach learns multi-style answers within a single model to improve the NLG capability for all styles involved (see the sketch below).
Ranked #1 on Question Answering on NarrativeQA (using extra training data)
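One common way to realize multi-style generation in a single model is to condition the decoder on a style token; the sketch below illustrates that general idea with hypothetical token names, and the paper's actual conditioning may differ.

```python
# Hedged sketch of multi-style answer generation: a style token is
# prepended to the decoder input so one shared model serves all answer
# styles. Token names are hypothetical, not the paper's vocabulary.
STYLE_TOKENS = {"extractive": "<ext>", "abstractive": "<abs>", "yes_no": "<yn>"}

def build_decoder_input(style: str, question: str) -> str:
    # The leading token selects the answer style at generation time.
    return f"{STYLE_TOKENS[style]} {question}"

print(build_decoder_input("abstractive", "Who wrote the novel?"))
# <abs> Who wrote the novel?
```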
no code implementations • CoNLL 2018 • Itsumi Saito, Kyosuke Nishida, Hisako Asano, Junji Tomita
To improve the accuracy of commonsense knowledge base (CKB) completion and expand the size of CKBs, we formulate a new CKB generation task and propose a joint learning method that incorporates both CKB completion and CKB generation.
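A minimal sketch of what such joint learning could look like: the completion loss (scoring a head-relation-tail triple) and the generation loss (decoding a tail phrase) are combined so gradients from both tasks update shared parameters. The weighting scheme is an assumption for illustration.

```python
# Sketch of a joint objective over shared parameters: CKB completion
# scores triples, CKB generation decodes tail phrases, and both losses
# are mixed. `alpha` is an illustrative assumption.
import torch

def joint_ckb_loss(completion_loss: torch.Tensor,
                   generation_loss: torch.Tensor,
                   alpha: float = 0.5) -> torch.Tensor:
    # Gradients from both tasks flow into the shared encoder.
    return alpha * completion_loss + (1.0 - alpha) * generation_loss

loss = joint_ckb_loss(torch.tensor(0.7), torch.tensor(1.2))
print(float(loss))
```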
no code implementations • 31 Aug 2018 • Kyosuke Nishida, Itsumi Saito, Atsushi Otsuka, Hisako Asano, Junji Tomita
Previous machine reading at scale (MRS) studies, in which the IR component was trained without considering answer spans, struggled to accurately find a small number of relevant passages in a large set of passages.
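A hedged sketch of the alternative, training retrieval with answer-span signal: the passage relevance score is supervised by whether the passage contains the gold span, alongside the usual span loss. All names and shapes below are illustrative assumptions, not the paper's code.

```python
# Sketch: jointly supervising the IR component with answer spans, in the
# spirit of multi-task retrieve-and-read training.
import torch
import torch.nn.functional as F

def retrieve_and_read_loss(relevance_logits, has_answer,
                           start_logits, end_logits, start_pos, end_pos):
    # IR term: binary label = "this passage contains the answer span"
    ir_loss = F.binary_cross_entropy_with_logits(relevance_logits,
                                                 has_answer.float())
    # RC term: span start/end cross-entropy on answer-bearing passages
    rc_loss = (F.cross_entropy(start_logits, start_pos)
               + F.cross_entropy(end_logits, end_pos)) / 2
    return ir_loss + rc_loss
```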
no code implementations • WS 2018 • Kazuki Sakai, Ryuichiro Higashinaka, Yuichiro Yoshikawa, Hiroshi Ishiguro, Junji Tomita
The results suggest that inserting the question-answer dialogue enhances familiarity and naturalness.
no code implementations • WS 2018 • Ryuichiro Higashinaka, Masahiro Mizukami, Hidetoshi Kawabata, Emi Yamaguchi, Noritake Adachi, Junji Tomita
Having consistent personalities is important for chatbots if we want them to be believable.
no code implementations • WS 2018 • Kosuke Nishida, Kyosuke Nishida, Hisako Asano, Junji Tomita
Natural language inference (NLI), the task of determining whether one sentence entails, contradicts, or is neutral with respect to another, is one of the most important tasks in NLP.
no code implementations • IJCNLP 2017 • Itsumi Saito, Kyosuke Nishida, Kugatsu Sadamitsu, Kuniko Saito, Junji Tomita
Social media texts, such as tweets from Twitter, contain many types of non-standard tokens, and a growing number of normalization approaches have been proposed for handling such noisy text.
no code implementations • IJCNLP 2017 • Koh Mitsuda, Ryuichiro Higashinaka, Junji Tomita
In this paper, through an experiment with human subjects, we explored the effect of a chat-oriented dialogue system conveying its understanding of user utterances.
no code implementations • IJCNLP 2017 • Itsumi Saito, Jun Suzuki, Kyosuke Nishida, Kugatsu Sadamitsu, Satoshi Kobashikawa, Ryo Masumura, Yuji Matsumoto, Junji Tomita
In this study, we investigated the effectiveness of augmented data for encoder-decoder-based neural normalization models.
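As a toy illustration of this kind of augmentation, synthetic (noisy, normalized) pairs can be generated by rule-based corruption of clean text and added to the training data; the corruption rule below is an assumption, not the paper's method.

```python
# Toy sketch: generate synthetic (noisy, normalized) training pairs for a
# neural text normalizer by corrupting clean text with a simple rule.
import random

def corrupt(clean: str) -> str:
    # Character lengthening, a common noisy-text pattern ("sooo good").
    chars = []
    for ch in clean:
        chars.append(ch * random.choice([1, 1, 1, 3]) if ch.isalpha() else ch)
    return "".join(chars)

random.seed(0)
clean_corpus = ["that is so good", "see you tomorrow"]
augmented_pairs = [(corrupt(s), s) for s in clean_corpus]
print(augmented_pairs)  # (noisy, normalized) pairs to add to training data
```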