Auxiliary Objectives for Neural Error Detection Models

WS 2017  ·  Marek Rei, Helen Yannakoudakis

We investigate the utility of different auxiliary objectives and training strategies within a neural sequence labeling approach to error detection in learner writing. Auxiliary costs provide the model with additional linguistic information, allowing it to learn general-purpose compositional features that can then be exploited for other objectives. Our experiments show that a joint learning approach trained with parallel labels on in-domain data improves performance over the previous best error detection system. While the resulting model has the same number of parameters, the additional objectives allow it to be optimised more efficiently and achieve better performance.
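To make the joint-learning setup concrete, here is a minimal sketch (not the authors' released code) of a Bi-LSTM sequence labeler with an auxiliary objective, in the spirit of the abstract: a shared encoder feeds a main error-detection head and an auxiliary head (e.g. POS tagging), and the two cross-entropy losses are summed during training. All names, dimensions, and the `aux_weight` hyperparameter are illustrative assumptions; the auxiliary head can be discarded at test time, leaving the main model's parameter count unchanged.

import torch
import torch.nn as nn

class JointErrorDetector(nn.Module):
    def __init__(self, vocab_size, n_pos_tags, emb_dim=300, hidden_dim=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Shared bidirectional LSTM encoder over the token sequence.
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # Main head: per-token correct/incorrect prediction (error detection).
        self.error_head = nn.Linear(2 * hidden_dim, 2)
        # Auxiliary head: per-token POS tags; provides extra supervision
        # during training and is simply dropped at test time.
        self.pos_head = nn.Linear(2 * hidden_dim, n_pos_tags)

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))
        return self.error_head(states), self.pos_head(states)

def joint_loss(model, tokens, error_labels, pos_labels, aux_weight=0.1):
    # Sum the main loss and a down-weighted auxiliary loss over parallel labels.
    err_logits, pos_logits = model(tokens)
    ce = nn.CrossEntropyLoss()
    main = ce(err_logits.reshape(-1, 2), error_labels.reshape(-1))
    aux = ce(pos_logits.reshape(-1, pos_logits.size(-1)), pos_labels.reshape(-1))
    return main + aux_weight * aux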

Results

Task                          Dataset         Model                              Metric   Value   Global Rank
Grammatical Error Detection   CoNLL-2014 A1   Bi-LSTM + POS (trained on FCE)     F0.5     17.5    # 7
Grammatical Error Detection   CoNLL-2014 A1   Bi-LSTM + POS (unrestricted data)  F0.5     36.1    # 2
Grammatical Error Detection   CoNLL-2014 A2   Bi-LSTM + POS (trained on FCE)     F0.5     26.2    # 6
Grammatical Error Detection   CoNLL-2014 A2   Bi-LSTM + POS (unrestricted data)  F0.5     45.1    # 2
Grammatical Error Detection   FCE             Bi-LSTM + err POS GR               F0.5     47.7    # 5
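All results are reported as F0.5, which weights precision twice as heavily as recall, a common choice in error detection where false alarms are costly. A minimal sketch of the standard F-beta formula follows; the example values are illustrative only and not taken from the paper's results.

def f_beta(precision: float, recall: float, beta: float = 0.5) -> float:
    """F-beta score; beta < 1 weights precision more heavily than recall."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Illustrative values only:
print(round(f_beta(precision=0.50, recall=0.40), 3))  # 0.476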

Methods


No methods listed for this paper.