no code implementations • LREC 2012 • Takenobu Tokunaga, Ryu Iida, Asuka Terai, Naoko Kuriyama
In this respect, we succeeded, as originally planned, in constructing a collection of corpora with varied characteristics by changing the configuration for each set of dialogues.
no code implementations • LREC 2014 • Ryu Iida, Takenobu Tokunaga
This paper presents the construction of a corpus of manually revised texts that includes both before- and after-revision information.
no code implementations • ACL 2019 • Jong-Hoon Oh, Kazuma Kadowaki, Julien Kloetzer, Ryu Iida, Kentaro Torisawa
In this paper, we propose a method for why-question answering (why-QA) that uses an adversarial learning framework.
no code implementations • IJCNLP 2019 • Kazuma Kadowaki, Ryu Iida, Kentaro Torisawa, Jong-Hoon Oh, Julien Kloetzer
Furthermore, we investigate the effect of supplying background knowledge to our classifiers.
1 code implementation • ACL 2021 • Jong-Hoon Oh, Ryu Iida, Julien Kloetzer, Kentaro Torisawa
We show that on the GLUE tasks, the combination of our pretrained CNN with ALBERT outperforms the original ALBERT and achieves performance similar to that of the state of the art (SOTA).