no code implementations • NAACL 2021 • Seiichiro Kondo, Kengo Hotate, Masahiro Kaneko, Mamoru Komachi
It is assumed that this issue is caused by an insufficient number of long sentences in the training data.
no code implementations • NAACL 2021 • Aomi Koyama, Kengo Hotate, Masahiro Kaneko, Mamoru Komachi
Therefore, GEC studies have developed various methods to generate pseudo data, which comprise pairs of grammatical and artificially produced ungrammatical sentences.
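One common way to build such pseudo data is to inject synthetic token-level noise (deletions, duplications, swaps) into grammatical sentences. The sketch below illustrates this generic corruption scheme; it is a hypothetical example, not the specific generation method of the paper.

```python
import random

def make_pseudo_pair(sentence, seed=0):
    """Create a (grammatical, pseudo-ungrammatical) pair by injecting
    simple token-level noise. Generic illustration only, not the
    paper's actual pseudo-data method."""
    rng = random.Random(seed)
    noisy = []
    for tok in sentence.split():
        r = rng.random()
        if r < 0.1:
            continue              # simulate a deletion error
        noisy.append(tok)
        if r > 0.9:
            noisy.append(tok)     # simulate a duplication error
    if len(noisy) > 1:            # simulate a word-order error
        i = rng.randrange(len(noisy) - 1)
        noisy[i], noisy[i + 1] = noisy[i + 1], noisy[i]
    return sentence, " ".join(noisy)

grammatical, ungrammatical = make_pseudo_pair("the cat sat on the mat")
```

Pairing each original sentence with its corrupted copy yields (target, source) training examples for a GEC model.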
no code implementations • COLING 2020 • Kengo Hotate, Masahiro Kaneko, Mamoru Komachi
In this study, we propose a beam search method to obtain diverse outputs in a local sequence transduction task where most of the tokens in the source and target sentences overlap, such as in grammatical error correction (GEC).
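Diversity-promoting beam search is typically implemented by penalizing a candidate token's score when other beams have already chosen it at the same step. The toy sketch below shows this general idea with a Hamming-style penalty; it is an assumption-laden illustration, not the paper's exact copy-aware method.

```python
def diverse_beam_search(step_scores, beam_size=2, penalty=1.0):
    """Toy greedy beam expansion with a diversity penalty: at each
    step, later beams see which tokens earlier beams picked and are
    discouraged from repeating them. Generic illustration only."""
    beams = [([], 0.0) for _ in range(beam_size)]
    for scores in step_scores:          # scores: token -> log-prob
        used = {}                       # tokens already picked this step
        next_beams = []
        for seq, total in beams:
            # best token after subtracting the per-use diversity penalty
            tok = max(scores, key=lambda t: scores[t] - penalty * used.get(t, 0))
            used[tok] = used.get(tok, 0) + 1
            next_beams.append((seq + [tok], total + scores[tok]))
        beams = next_beams
    return beams

beams = diverse_beam_search(
    [{"a": 0.0, "b": -0.5}, {"x": 0.0, "y": -0.2}], beam_size=2
)
# with the penalty, the two beams choose different tokens at each step
```

Without the penalty term, both beams would greedily follow the same top-scoring tokens and produce identical outputs.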
no code implementations • WS 2019 • Masahiro Kaneko, Kengo Hotate, Satoru Katsumata, Mamoru Komachi
Thus, it is not straightforward to utilize language representations trained from a large corpus, such as Bidirectional Encoder Representations from Transformers (BERT), in a form suitable for the learner's grammatical errors.
no code implementations • ACL 2019 • Kengo Hotate, Masahiro Kaneko, Satoru Katsumata, Mamoru Komachi
In this paper, we propose a method for neural grammatical error correction (GEC) that can control the degree of correction.
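One simple way to control the degree of correction is to apply a proposed edit only when the model's confidence clears a threshold: raising the threshold yields more conservative output. The sketch below is a hypothetical illustration of this thresholding idea, not the paper's actual control mechanism.

```python
def apply_corrections(tokens, candidate_edits, threshold=0.5):
    """Apply a proposed replacement only if its confidence exceeds
    the threshold. `candidate_edits` maps a token index to a
    (replacement, confidence) pair. Hypothetical illustration only."""
    out = []
    for i, tok in enumerate(tokens):
        repl, conf = candidate_edits.get(i, (tok, 0.0))
        out.append(repl if conf >= threshold else tok)
    return out

tokens = ["He", "go", "home"]
edits = {1: ("goes", 0.8)}   # model suggests "go" -> "goes" with conf 0.8
```

Sweeping the threshold from 0 to 1 moves the system from aggressive correction toward leaving the input untouched.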