no code implementations • EACL 2021 • Ryuji Kano, Takumi Takahashi, Toru Nishino, Motoki Taniguchi, Tomoki Taniguchi, Tomoko Ohkuma
We conduct experiments on three summarization models: one pretrained model and two non-pretrained models, and verify that our method improves their performance.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Toru Nishino, Ryota Ozaki, Yohei Momoki, Tomoki Taniguchi, Ryuji Kano, Norihisa Nakano, Yuki Tagawa, Motoki Taniguchi, Tomoko Ohkuma, Keigo Nakamura
To train the data-to-text module on a highly imbalanced dataset, we propose a novel reinforcement learning method with a reconstructor that improves the clinical correctness of generated reports.
no code implementations • IJCNLP 2019 • Toru Nishino, Shotaro Misawa, Ryuji Kano, Tomoki Taniguchi, Yasuhide Miura, Tomoko Ohkuma
The results show that our model generates more consistent headlines, key phrases, and categories.