Supervised and Unsupervised Neural Approaches to Text Readability

We present a set of novel neural supervised and unsupervised approaches for determining the readability of documents. In the unsupervised setting, we leverage neural language models, whereas in the supervised setting, three different neural classification architectures are tested. We show that the proposed neural unsupervised approach is robust, transferable across languages, and adaptable to a specific readability task and data set. Through a systematic comparison of several neural architectures on a number of benchmark and newly labelled readability datasets in two languages, this study also offers a comprehensive analysis of different neural approaches to readability classification. We expose their strengths and weaknesses, compare their performance to current state-of-the-art classification approaches to readability, which in most cases still rely on extensive feature engineering, and propose possibilities for improvements.
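The unsupervised idea summarized above rests on a simple intuition: a language model assigns lower perplexity to more predictable (and typically more readable) text. The paper uses neural language models; the sketch below substitutes a toy add-one-smoothed bigram model so the scoring logic is self-contained, and all names and the tiny corpus are illustrative, not from the paper.

```python
import math
from collections import Counter

def train_bigram_lm(corpus):
    """Estimate add-one-smoothed bigram probabilities from tokenized sentences."""
    unigrams, bigrams = Counter(), Counter()
    for sent in corpus:
        toks = ["<s>"] + sent + ["</s>"]
        unigrams.update(toks[:-1])
        bigrams.update(zip(toks[:-1], toks[1:]))
    vocab = len({t for s in corpus for t in s}) + 2  # + <s>, </s>
    def prob(prev, word):
        return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab)
    return prob

def perplexity(prob, sent):
    """Per-token perplexity of one tokenized sentence; lower = more predictable."""
    toks = ["<s>"] + sent + ["</s>"]
    logp = sum(math.log(prob(p, w)) for p, w in zip(toks[:-1], toks[1:]))
    return math.exp(-logp / (len(toks) - 1))

# Toy corpus standing in for the language-model training data.
corpus = [["the", "cat", "sat"], ["the", "dog", "sat"], ["the", "cat", "ran"]]
prob = train_bigram_lm(corpus)
easy = perplexity(prob, ["the", "cat", "sat"])  # familiar word order
hard = perplexity(prob, ["sat", "ran", "dog"])  # scrambled, unseen order
```

Under this proxy, `easy` scores lower perplexity than `hard`, i.e. it is ranked as more readable; a neural LM would replace `train_bigram_lm` while the scoring step stays the same in spirit.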

Published in CL (ACL) 2021.
| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Text Classification | OneStopEnglish (Readability Assessment) | HAN (Hierarchical Attention Network) | Accuracy (5-fold) | 0.787 | #3 |
| Text Classification | WeeBit (Readability Assessment) | BERT | Accuracy (5-fold) | 0.857 | #2 |
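The "Accuracy (5-fold)" metric reported above averages classification accuracy over five cross-validation splits. A minimal sketch of just the fold-and-average logic, with a trivial stand-in predictor in place of the trained HAN or BERT classifier (per-fold retraining is omitted for brevity; the data and predictor here are invented for illustration):

```python
def kfold_accuracy(X, y, predict_fn, k=5):
    """Average accuracy over k contiguous held-out folds."""
    n = len(X)
    accs = []
    for i in range(k):
        lo, hi = i * n // k, (i + 1) * n // k  # boundaries of fold i
        correct = sum(predict_fn(X[j]) == y[j] for j in range(lo, hi))
        accs.append(correct / (hi - lo))
    return sum(accs) / k

# Toy "documents" labelled by whether they exceed 3 tokens; the stand-in
# predictor applies the same rule, so each fold scores perfectly.
X = [["a"] * length for length in [2, 5, 1, 6, 3, 7, 2, 8, 4, 9]]
y = [len(doc) > 3 for doc in X]
score = kfold_accuracy(X, y, lambda doc: len(doc) > 3, k=5)
```

In the paper's setting, `predict_fn` would be a model retrained on the four remaining folds before scoring each held-out fold.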
