Text Readability Assessment for Second Language Learners

WS 2016 · Menglin Xia, Ekaterina Kochmar, Ted Briscoe

This paper addresses the task of readability assessment for texts aimed at second language (L2) learners. One of the major challenges in this task is the lack of significantly sized level-annotated data. For the present work, we collected a dataset of CEFR-graded texts tailored for learners of English as an L2 and investigated text readability assessment for both native and L2 learners. We applied a generalization method to adapt models trained on larger native corpora to estimate text readability for learners, and explored domain adaptation and self-learning techniques to make use of the native data to improve system performance on the limited L2 data. In our experiments, the best-performing model for readability on learner texts achieves an accuracy of 0.797 and a PCC of 0.938.
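The self-learning idea described in the abstract can be illustrated with a minimal sketch: train a classifier on the small labeled L2 set, then iteratively pseudo-label the most confident examples from a larger unlabeled (native-domain) pool and retrain. The feature matrices, dataset sizes, confidence threshold, and iteration count below are illustrative placeholders, not the paper's actual setup.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder feature matrices: rows are texts, columns are handcrafted
# features (e.g. average sentence length, type-token ratio).
X_labeled = rng.normal(size=(50, 5))        # small CEFR-labeled L2 set
y_labeled = rng.integers(0, 5, size=50)     # five CEFR-like levels
X_unlabeled = rng.normal(size=(500, 5))     # larger unlabeled native-domain set

clf = SVC(probability=True, random_state=0)
threshold = 0.9                             # pseudo-label confidence cutoff (assumed)

for iteration in range(5):
    clf.fit(X_labeled, y_labeled)
    if len(X_unlabeled) == 0:
        break
    probs = clf.predict_proba(X_unlabeled)
    confident = probs.max(axis=1) >= threshold
    if not confident.any():
        break
    # Move confidently pseudo-labeled examples into the training set.
    X_labeled = np.vstack([X_labeled, X_unlabeled[confident]])
    y_labeled = np.concatenate([y_labeled, probs[confident].argmax(axis=1)])
    X_unlabeled = X_unlabeled[~confident]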


Datasets


Task: Text Classification
Dataset: WeeBit (Readability Assessment)
Model: SVM (Support Vector Machine) with Handcrafted Features
Metric: Accuracy (5-fold)
Metric Value: 0.803
Global Rank: #5
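For reference, a 5-fold accuracy figure like the one above can be computed with standard cross-validation over an SVM on handcrafted feature vectors. This is only a sketch with placeholder features and data, not the paper's actual WeeBit experimental setup.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))        # handcrafted feature vectors per text (assumed)
y = rng.integers(0, 5, size=200)     # readability levels (assumed)

scores = cross_val_score(SVC(), X, y, cv=5, scoring="accuracy")
print(f"Accuracy (5-fold): {scores.mean():.3f}")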

Methods


No methods listed for this paper.