Grammatical Error Detection Using Error- and Grammaticality-Specific Word Embeddings

In this study, we improve grammatical error detection by learning word embeddings that consider grammaticality and error patterns. Most existing algorithms for learning word embeddings model only the syntactic context of words, so classifiers treat erroneous and correct words as similar inputs. We address this problem of contextual information by taking learner errors into account. Specifically, we propose two models: one that employs grammatical error patterns and another that considers the grammaticality of the target word. We determine the grammaticality of an n-gram sequence from annotated error tags and extract grammatical error patterns for word embeddings from large-scale learner corpora. Experimental results show that a bidirectional long short-term memory model initialized with our word embeddings achieves state-of-the-art accuracy by a large margin on an English grammatical error detection task on the First Certificate in English dataset.
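The approach described above depends on knowing which tokens in a learner sentence are erroneous, so that n-gram sequences can be labeled grammatical or ungrammatical. As an illustration only (the function name, flag format, and windowing scheme are assumptions for this sketch, not taken from the paper), deriving n-gram grammaticality labels from token-level error flags might look like:

```python
# Hypothetical sketch: label each n-gram as ungrammatical (1) if it
# contains any token flagged as an error, else grammatical (0).
def ngram_labels(tokens, error_flags, n=3):
    """Return one 0/1 label per n-gram window over the sentence."""
    labels = []
    for i in range(len(tokens) - n + 1):
        window = error_flags[i:i + n]
        labels.append(int(any(window)))  # 1 if any error in the window
    return labels

tokens = ["He", "have", "a", "dog"]
flags = [0, 1, 0, 0]  # "have" is tagged as an error
print(ngram_labels(tokens, flags, n=3))  # [1, 1]
```

Both trigrams here overlap the erroneous token "have", so both windows receive the ungrammatical label.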

PDF | IJCNLP 2017 Abstract


Results from the Paper

Task: Grammatical Error Detection
Dataset: FCE
Model: Bi-LSTM + Error- and Grammaticality-Specific Word Embeddings
Metric: F0.5 = 44.6
Global Rank: #6 on this benchmark
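The F0.5 metric reported above weights precision more heavily than recall, which is conventional in error detection because falsely flagging correct text is considered more harmful than missing an error. A minimal sketch of the generic F-beta formula (the function name is an assumption for this example):

```python
def f_beta(precision, recall, beta=0.5):
    """Generic F-beta score; beta=0.5 weights precision over recall."""
    if precision == 0.0 and recall == 0.0:
        return 0.0  # avoid division by zero when both are zero
    b2 = beta * beta
    return (1.0 + b2) * precision * recall / (b2 * precision + recall)

# Example: precision 0.6, recall 0.3 -> F0.5 of 0.5
print(f_beta(0.6, 0.3))  # 0.5
```

With beta = 1 the same formula reduces to the familiar harmonic mean of precision and recall (the F1 score).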
