A Neural Grammatical Error Correction System Built On Better Pre-training and Sequential Transfer Learning

WS 2019 · Yo Joong Choe, Jiyeon Ham, Kyubyong Park, Yeoil Yoon

Grammatical error correction can be viewed as a low-resource sequence-to-sequence task, because publicly available parallel corpora are limited. To tackle this challenge, we first generate erroneous versions of large unannotated corpora using a realistic noising function. The resulting parallel corpora are subsequently used to pre-train Transformer models. Then, by sequentially applying transfer learning, we adapt these models to the domain and style of the test set. Combined with a context-aware neural spellchecker, our system achieves competitive results in both the restricted and low-resource tracks of the ACL 2019 BEA Shared Task. We release all of our code and materials for reproducibility.
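The noising step can be illustrated with a minimal sketch. The `noise_sentence` helper below is hypothetical: it corrupts clean monolingual text with simple token-level perturbations (dropped words, adjacent transpositions, duplicated tokens, article substitutions) to form (erroneous, clean) training pairs, rather than the realistic error patterns the paper derives for its actual noising function.

```python
import random

# Hypothetical illustration of generating synthetic GEC pre-training pairs.
# This is NOT the paper's noising function, which models realistic human
# error patterns; it only shows the overall idea of corrupting clean text.

ARTICLES = ["a", "an", "the"]

def noise_sentence(tokens, p=0.15, rng=random):
    """Return a corrupted copy of `tokens` to serve as the erroneous source side."""
    noisy = []
    i = 0
    while i < len(tokens):
        tok = tokens[i]
        if rng.random() < p:
            op = rng.choice(["drop", "swap", "dup", "article"])
            if op == "drop":
                pass  # omit the token (simulates a missing-word error)
            elif op == "swap" and i + 1 < len(tokens):
                noisy.extend([tokens[i + 1], tok])  # transpose adjacent tokens
                i += 1  # skip the token we already emitted
            elif op == "dup":
                noisy.extend([tok, tok])  # repeated-word error
            elif op == "article" and tok.lower() in ARTICLES:
                noisy.append(rng.choice(ARTICLES))  # wrong article choice
            else:
                noisy.append(tok)  # chosen operation not applicable; keep token
        else:
            noisy.append(tok)
        i += 1
    return noisy

# Usage: corrupt a clean sentence to build a (source, target) pair for
# pre-training a sequence-to-sequence (e.g., Transformer) model.
clean = "the quick brown fox jumps over the lazy dog".split()
print(" ".join(noise_sentence(clean)), "->", " ".join(clean))
```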

Task                           Dataset           Model        Metric  Value  Global Rank
Grammatical Error Correction   BEA-2019 (test)   Transformer  F0.5    69.0   #14
