no code implementations • EACL 2021 • Ryuji Kano, Takumi Takahashi, Toru Nishino, Motoki Taniguchi, Tomoki Taniguchi, Tomoko Ohkuma
We conduct experiments on three summarization models, one pretrained and two non-pretrained, and verify that our method improves their performance.
no code implementations • AACL 2020 • Ryuji Kano, Yasuhide Miura, Tomoki Taniguchi, Tomoko Ohkuma
The training task of the model is to predict whether a reply candidate is a true reply to a post.
Extractive Summarization • Unsupervised Extractive Summarization
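As an illustration of this reply-prediction objective, the sketch below scores (post, reply candidate) pairs with a toy bi-encoder and a binary cross-entropy loss. The bag-of-words encoder, dimensions, and negative examples are all illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ReplyScorer(nn.Module):
    """Toy bi-encoder: embeds a post and a reply candidate,
    then scores whether the candidate is a true reply."""
    def __init__(self, vocab_size=10000, dim=128):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim)  # mean-pooled bag of words
        self.classifier = nn.Linear(2 * dim, 1)

    def forward(self, post_ids, reply_ids):
        post_vec = self.embed(post_ids)
        reply_vec = self.embed(reply_ids)
        pair = torch.cat([post_vec, reply_vec], dim=-1)
        return self.classifier(pair).squeeze(-1)  # one logit per pair

model = ReplyScorer()
loss_fn = nn.BCEWithLogitsLoss()

# Dummy batch: token ids for posts and reply candidates, with
# labels 1 (true reply) / 0 (negative sample). All hypothetical data.
posts = torch.randint(0, 10000, (4, 20))
replies = torch.randint(0, 10000, (4, 20))
labels = torch.tensor([1.0, 0.0, 1.0, 0.0])

loss = loss_fn(model(posts, replies), labels)
loss.backward()
```

In practice the negatives would typically be replies drawn from other posts, so the classifier learns post-reply affinity rather than reply fluency alone.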
no code implementations • Findings of EMNLP 2020 • Toru Nishino, Ryota Ozaki, Yohei Momoki, Tomoki Taniguchi, Ryuji Kano, Norihisa Nakano, Yuki Tagawa, Motoki Taniguchi, Tomoko Ohkuma, Keigo Nakamura
We propose a novel reinforcement learning method with a reconstructor that improves the clinical correctness of generated reports, allowing the data-to-text module to be trained on a highly imbalanced dataset.
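To make the reward idea concrete, here is a minimal REINFORCE-style step in which a reconstruction score stands in for clinical correctness: a report is sampled, a reconstructor checks how much of the input it can recover, and the recovery rate scales the policy-gradient update. The toy linear policy, the simulated reconstructor, and the baseline are assumptions for illustration, not the paper's models.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab, n_findings = 8, 5
policy = nn.Linear(n_findings, vocab)       # findings -> token logits (toy generator)
findings = torch.randint(0, vocab, (n_findings,))

logits = policy(findings.float()).expand(n_findings, vocab)
dist = torch.distributions.Categorical(logits=logits)
report = dist.sample()                       # sampled "report" tokens

# Reconstruction reward: fraction of input findings recoverable from
# the generated report (a proxy for clinical correctness).
reward = (report == findings).float().mean()
baseline = 1.0 / vocab                       # expected reward of a random policy

# REINFORCE: scale the report's log-likelihood by the advantage.
loss = -(reward - baseline) * dist.log_prob(report).sum()
loss.backward()
```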
no code implementations • IJCNLP 2019 • Toru Nishino, Shotaro Misawa, Ryuji Kano, Tomoki Taniguchi, Yasuhide Miura, Tomoko Ohkuma
The results show that our model generates headlines, key phrases, and categories that are more consistent with one another.
no code implementations • EMNLP 2018 • Ryuji Kano, Yasuhide Miura, Motoki Taniguchi, Yan-Ying Chen, Francine Chen, Tomoko Ohkuma
We leverage a popularity measure in social media as a distant label for extractive summarization of online conversations.
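A minimal sketch of this distant-supervision idea follows, assuming like counts as the popularity signal and a median threshold for labeling; both choices, and the linear scorer, are illustrative rather than the paper's setup.

```python
import torch
import torch.nn as nn

# Distant labels: treat a popularity signal (e.g., like counts) as a
# noisy indicator of which utterances in a conversation are summary-worthy.
likes = torch.tensor([120., 3., 45., 0., 88.])     # popularity per utterance
distant_labels = (likes > likes.median()).float()  # popular -> "include"

features = torch.randn(5, 16)                      # utterance encodings (toy)
scorer = nn.Linear(16, 1)                          # extractive scorer

logits = scorer(features).squeeze(-1)
loss = nn.BCEWithLogitsLoss()(logits, distant_labels)
loss.backward()

# At inference, the top-scoring utterances form the extractive summary.
summary_idx = logits.topk(2).indices
```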
no code implementations • COLING 2018 • Yasuhide Miura, Ryuji Kano, Motoki Taniguchi, Tomoki Taniguchi, Shotaro Misawa, Tomoko Ohkuma
We propose a model that integrates discussion structures with neural networks to classify discourse acts; a rough sketch of the idea follows.
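The sketch below mixes each post's encoding with its parent's (the reply-to structure) before classifying the discourse act. The tree encoding and all names are hypothetical stand-ins, not the paper's model.

```python
import torch
import torch.nn as nn

n_posts, dim, n_acts = 6, 32, 4
post_vecs = torch.randn(n_posts, dim)        # encoded posts (toy)
parent = torch.tensor([0, 0, 1, 1, 0, 4])    # parent index per post (root points to itself)

mix = nn.Linear(2 * dim, dim)
classifier = nn.Linear(dim, n_acts)

# Inject discussion structure: concatenate each post with its parent's encoding.
structured = torch.relu(mix(torch.cat([post_vecs, post_vecs[parent]], dim=-1)))
act_logits = classifier(structured)          # one discourse-act prediction per post
```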