Evaluation of large-scale synthetic data for Grammar Error Correction

31 Oct 2022 · Vanya Bannihatti Kumar

Grammar Error Correction (GEC) relies heavily on the availability of a large amount of high-quality synthetic parallel data consisting of grammatically correct and erroneous sentence pairs. The quality of this synthetic data is typically evaluated by how well a GEC system performs when pre-trained on it, but this provides little insight into which factors actually determine data quality. This work therefore introduces 3 metrics - reliability, diversity and distribution match - to provide more insight into the quality of large-scale synthetic data generated for the GEC task, and proposes ways to evaluate them automatically. Evaluating these three metrics automatically can also provide feedback to data generation systems and thereby dynamically improve the quality of the generated synthetic data.
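
The abstract does not spell out how the three metrics are computed. As a rough, hypothetical illustration only, the sketch below proxies diversity with a distinct-n ratio and distribution match with a smoothed unigram KL divergence between synthetic and reference text; the function names, formulas, and example sentences are assumptions for illustration, not the paper's definitions.

```python
import math
from collections import Counter

def distinct_n(sentences, n=2):
    """Diversity proxy: ratio of unique n-grams to total n-grams (higher = more diverse)."""
    ngrams = Counter()
    total = 0
    for s in sentences:
        toks = s.split()
        for i in range(len(toks) - n + 1):
            ngrams[tuple(toks[i:i + n])] += 1
            total += 1
    return len(ngrams) / total if total else 0.0

def kl_divergence(p_counts, q_counts, eps=1e-9):
    """Distribution-match proxy: smoothed KL divergence between two unigram
    distributions (lower = synthetic text better matches the reference)."""
    vocab = set(p_counts) | set(q_counts)
    p_total = sum(p_counts.values()) or 1
    q_total = sum(q_counts.values()) or 1
    kl = 0.0
    for w in vocab:
        p = p_counts[w] / p_total + eps  # smoothing avoids log(0)
        q = q_counts[w] / q_total + eps
        kl += p * math.log(p / q)
    return kl

# Toy synthetic (erroneous) and reference sentences, invented for this sketch.
synthetic = ["she go to school yesterday", "he have two cat"]
reference = ["she goes to school", "he has two cats"]

print("distinct-2 (diversity):", distinct_n(synthetic))
p = Counter(w for s in synthetic for w in s.split())
q = Counter(w for s in reference for w in s.split())
print("KL(synthetic || reference) (distribution match):", kl_divergence(p, q))
```

A reliability proxy (e.g., checking that the "correct" side of each pair is actually grammatical) would additionally require a grammaticality judge such as a trained checker, which is omitted from this sketch.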
