Automated Essay Scoring
26 papers with code • 1 benchmark • 1 dataset
Automated Essay Scoring is the task of assigning a score to an essay, usually to assess the language ability of a language learner. Essay quality is typically judged along four primary dimensions: topic relevance, organization and coherence, word usage and sentence complexity, and grammar and mechanics.
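As a minimal sketch of the task (not any specific published system), a holistic score can be produced by combining hand-crafted features that roughly track the quality dimensions above. The feature names and weights below are hypothetical placeholders for illustration only.

```python
import re

def extract_features(essay: str) -> dict:
    """Crude proxies for word usage, sentence complexity, and length."""
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    words = essay.split()
    unique_ratio = len(set(w.lower() for w in words)) / max(len(words), 1)
    avg_sentence_len = len(words) / max(len(sentences), 1)
    return {
        "length": len(words),                   # longer essays often score higher
        "lexical_diversity": unique_ratio,      # proxy for word usage
        "avg_sentence_len": avg_sentence_len,   # proxy for sentence complexity
    }

def score_essay(essay: str, weights: dict) -> float:
    """Weighted linear combination of features, clipped to a 0-6 holistic scale."""
    feats = extract_features(essay)
    raw = sum(weights[k] * v for k, v in feats.items())
    return max(0.0, min(6.0, raw))
```

Modern neural approaches replace the hand-crafted features with learned representations (e.g. a transformer encoder with a regression head), but the overall shape of the task, essay in, bounded score out, is the same.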
Source: A Joint Model for Multimodal Document Quality Assessment
Latest papers with no code
Graded Relevance Scoring of Written Essays with Dense Retrieval
While holistic essay scoring research is prevalent, a noticeable gap exists in scoring essays for specific quality traits.
Can GPT-4 do L2 analytic assessment?
Automated essay scoring (AES) to evaluate second language (L2) proficiency has been a firmly established technology used in educational contexts for decades.
Prompting Large Language Models for Zero-shot Essay Scoring via Multi-trait Specialization
An LLM is prompted to extract trait scores over several conversational rounds, each round scoring one trait against its scoring criteria.
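The one-round-per-trait idea can be sketched as follows. The trait names, criteria text, and aggregation are illustrative assumptions, not the paper's exact prompts; an actual system would send each prompt to an LLM and parse the returned score.

```python
# Hypothetical trait rubric: each trait gets its own scoring round.
TRAITS = {
    "organization": "Rate 1-5 how logically the essay is structured.",
    "word_usage": "Rate 1-5 the variety and appropriateness of vocabulary.",
    "grammar": "Rate 1-5 the grammatical accuracy and mechanics.",
}

def build_trait_prompts(essay: str) -> dict:
    """One prompt per trait; each conversational round scores a single trait."""
    return {
        trait: f"{criteria}\n\nEssay:\n{essay}\n\nScore:"
        for trait, criteria in TRAITS.items()
    }

def aggregate(trait_scores: dict) -> float:
    """Combine per-trait scores into an overall score (simple mean here)."""
    return sum(trait_scores.values()) / len(trait_scores)
```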
Transformer-based Joint Modelling for Automatic Essay Scoring and Off-Topic Detection
The proposed Automated Open Essay Scoring (AOES) model uses a novel topic regularization module (TRM), which can be attached on top of a transformer model, and is trained using a proposed hybrid loss function.
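A hybrid loss of this kind typically sums a scoring objective with an auxiliary topic term. The sketch below is a generic illustration under that assumption; the MSE/BCE choice and the `lam` weight are placeholders, not the paper's exact formulation.

```python
import math

def mse(pred: float, target: float) -> float:
    """Squared error on the predicted essay score."""
    return (pred - target) ** 2

def bce(p: float, y: int) -> float:
    """Binary cross-entropy for on-topic (y=1) vs off-topic (y=0)."""
    eps = 1e-12
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

def hybrid_loss(score_pred: float, score_true: float,
                topic_prob: float, on_topic: int, lam: float = 0.5) -> float:
    """Weighted sum of the scoring loss and a topic-regularization loss."""
    return mse(score_pred, score_true) + lam * bce(topic_prob, on_topic)
```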
Frustratingly Simple Prompting-based Text Denoising
This paper introduces a novel perspective on the automated essay scoring (AES) task, challenging the conventional view of the ASAP dataset as a static entity.
Empirical Study of Large Language Models as Automated Essay Scoring Tools in English Composition: Taking the TOEFL Independent Writing Task as an Example
The primary objective is to assess the capabilities and constraints of ChatGPT, a prominent representative of large language models, within the context of automated essay scoring.
Enhancing Essay Scoring with Adversarial Weights Perturbation and Metric-specific Attention Pooling
To address the specific needs of ELLs, we propose the use of DeBERTa, a state-of-the-art neural language model, for improving automated feedback tools.
FABRIC: Automated Scoring and Feedback Generation for Essays
The second component is CASE, a Corruption-based Augmentation Strategy for Essays, with which we can improve the accuracy of the baseline model by 45.44%.
Rubric-Specific Approach to Automated Essay Scoring with Augmentation Training
Neural based approaches to automatic evaluation of subjective responses have shown superior performance and efficiency compared to traditional rule-based and feature engineering oriented solutions.
Automated Essay Scoring in Argumentative Writing: DeBERTeachingAssistant
Automated essay scoring has been explored as a research and industry problem for over 50 years.