To guarantee high-quality predictions, we present the first study exploring a human-in-the-loop framework that minimizes grading cost while maintaining grading quality by allowing a short answer scoring (SAS) model to share the grading task with a human grader.
Natural language processing technology has rapidly improved automated grammatical error correction, and the community has begun to explore document-level revision as one of the next challenges.
We introduce a new task formulation for SAS that matches its actual usage.
Most existing short answer grading (SAG) systems, including the state-of-the-art model used as the baseline in this paper, predict scores based only on the answers.
Incorporating pseudo data into the training of grammatical error correction models has been one of the main factors behind their improved performance.
Ranked #5 on Grammatical Error Correction on BEA-2019 (test)
We introduce the AIP-Tohoku grammatical error correction (GEC) system for the BEA-2019 shared task in Track 1 (Restricted Track) and Track 2 (Unrestricted Track) using the same system architecture.
This paper provides an analytical assessment of student short answer responses with a view to potential benefits in pedagogical contexts.
This study explores the necessity of performing cross-corpora evaluation for grammatical error correction (GEC) models.
Based on a discussion of the possible causes of POS tagging errors in learner English, we show that deep neural models are particularly well suited to this task.
Part-of-speech (POS) tagging and chunking have been used in tasks targeting learner English; however, to the best of our knowledge, few studies have evaluated their performance, and no studies have analyzed the causes of POS-tagging/chunking errors in detail.
In grammatical error correction (GEC), automatically evaluating system outputs requires gold-standard references, which must be created manually and thus tend to be both expensive and limited in coverage.