Search Results for author: Andrea Horbach

Found 25 papers, 3 papers with code

‘Meet me at the ribary’ – Acceptability of spelling variants in free-text answers to listening comprehension prompts

no code implementations NAACL (BEA) 2022 Ronja Laarmann-Quante, Leska Schwarz, Andrea Horbach, Torsten Zesch

When listening comprehension is tested as a free-text production task, a challenge for scoring the answers is the resulting wide range of spelling variants.
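One simple way to operationalize "acceptable spelling variant" (purely illustrative, not the method from the paper) is to accept an answer whose edit distance to the target word is small relative to the target's length, so that "ribary" still counts as a hit for "library":

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def is_acceptable_variant(answer: str, target: str, max_ratio: float = 0.34) -> bool:
    """Accept a spelling variant if its edit distance is small relative to the
    target length. The 0.34 threshold is an arbitrary placeholder, not a value
    from the paper."""
    return levenshtein(answer.lower(), target.lower()) <= max_ratio * len(target)

print(is_acceptable_variant("ribary", "library"))  # distance 2 vs. threshold ~2.4
```

A real scorer would also need phonetic similarity (learners often spell by sound), which a pure edit-distance cutoff cannot capture.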

C-Test Collector: A Proficiency Testing Application to Collect Training Data for C-Tests

no code implementations EACL (BEA) 2021 Christian Haring, Rene Lehmann, Andrea Horbach, Torsten Zesch

We present the C-Test Collector, a web-based tool that allows language learners to test their proficiency level using c-tests.

Implicit Phenomena in Short-answer Scoring Data

no code implementations ACL (unimplicit) 2021 Marie Bexte, Andrea Horbach, Torsten Zesch

We therefore quantify to what extent implicit language phenomena occur in short answer datasets and examine the influence they have on automatic scoring performance.

Similarity-Based Content Scoring - How to Make S-BERT Keep Up With BERT

1 code implementation NAACL (BEA) 2022 Marie Bexte, Andrea Horbach, Torsten Zesch

The dominating paradigm for content scoring is to learn an instance-based model, i.e., to use lexical features derived from the learner answers themselves.
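The contrast the snippet draws is between instance-based models (a classifier trained on lexical features of learner answers) and similarity-based scoring, where a new answer inherits the label of the most similar reference answer. A minimal sketch of the similarity-based side, using bag-of-words cosine similarity as a cheap stand-in for the S-BERT sentence embeddings the paper actually uses:

```python
import math
from collections import Counter

def bow_cosine(a: str, b: str) -> float:
    """Cosine similarity over bag-of-words counts -- a stand-in for the
    sentence-embedding similarity computed with S-BERT in the paper."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def similarity_score(answer: str, references: dict[str, int]) -> int:
    """Similarity-based scoring: assign the label of the most similar
    reference answer instead of training an instance-based classifier."""
    best = max(references, key=lambda ref: bow_cosine(answer, ref))
    return references[best]

refs = {"the mitochondria produce energy for the cell": 1,  # invented example data
        "i do not know": 0}
print(similarity_score("mitochondria make energy for the cell", refs))  # -> 1
```

Swapping `bow_cosine` for embedding similarity is what makes the approach robust to paraphrases that share no surface vocabulary, which is exactly where a bag-of-words stand-in breaks down.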

LeSpell - A Multi-Lingual Benchmark Corpus of Spelling Errors to Develop Spellchecking Methods for Learner Language

1 code implementation LREC 2022 Marie Bexte, Ronja Laarmann-Quante, Andrea Horbach, Torsten Zesch

Spellchecking text written by language learners is especially challenging because errors made by learners differ both quantitatively and qualitatively from errors made by already proficient learners.

Chinese Content Scoring: Open-Access Datasets and Features on Different Segmentation Levels

no code implementations AACL 2020 Yuning Ding, Andrea Horbach, Torsten Zesch

As a review of prior work for Chinese content scoring shows a lack of open-access data in the field, we present two short-answer data sets for Chinese.
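The "segmentation levels" in the title matter because Chinese text has no whitespace: features can be extracted from raw character n-grams (no segmenter needed) or from segmented words. A toy sketch of the character-level side, with the word segmentation simply given by hand in place of a real segmenter such as jieba:

```python
def char_ngrams(text: str, n: int = 2) -> list[str]:
    """Character-level n-grams -- segmentation-free scoring features."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

answer = "光合作用产生氧气"            # "photosynthesis produces oxygen" (invented example)
words = ["光合作用", "产生", "氧气"]   # hand-segmented stand-in for a word segmenter

print(char_ngrams(answer))  # ['光合', '合作', '作用', '用产', '产生', '生氧', '氧气']
print(words)
```

Character n-grams sidestep segmentation errors entirely, at the cost of features that straddle word boundaries (e.g. '用产' above).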

Linguistic Appropriateness and Pedagogic Usefulness of Reading Comprehension Questions

no code implementations LREC 2020 Andrea Horbach, Itziar Aldabe, Marie Bexte, Oier Lopez de Lacalle, Montse Maritxalar

Automatic generation of reading comprehension questions is a topic receiving growing interest in the NLP community, but there is currently no consensus on evaluation metrics, and many approaches focus on linguistic quality only while ignoring the pedagogic value and appropriateness of questions.

Cross-Lingual Content Scoring

no code implementations WS 2018 Andrea Horbach, Sebastian Stennmanns, Torsten Zesch

We investigate the feasibility of cross-lingual content scoring, a scenario where training and test data in an automatic scoring task are from two different languages.

The Influence of Spelling Errors on Content Scoring Performance

no code implementations WS 2017 Andrea Horbach, Yuning Ding, Torsten Zesch

Spelling errors occur frequently in educational settings, but their influence on automatic scoring is largely unknown.

Fine-grained essay scoring of a complex writing task for native speakers

no code implementations WS 2017 Andrea Horbach, Dirk Scholten-Akoun, Yuning Ding, Torsten Zesch

Automatic essay scoring is nowadays successfully used even in high-stakes tests, but this is mainly limited to holistic scoring of learner essays.

Unsupervised Ranked Cross-Lingual Lexical Substitution for Low-Resource Languages

no code implementations LREC 2016 Stefan Ecker, Andrea Horbach, Stefan Thater

We propose an unsupervised system for a variant of cross-lingual lexical substitution (CLLS) to be used in a reading scenario in computer-assisted language learning (CALL), in which single-word translations provided by a dictionary are ranked according to their appropriateness in context.
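One crude unsupervised proxy for "appropriateness in context" (illustrative only; the paper's system uses distributional models, not a hand-built table) is to rank each dictionary translation by how many of its typical co-occurrence words appear in the sentence. The tiny co-occurrence table below is invented for the example:

```python
COOCCURRENCE = {  # invented: German translations of English "bank" with typical context words
    "Bank (Geldinstitut)": {"money", "account", "loan", "deposit"},
    "Ufer (Flussufer)":    {"river", "water", "fishing", "shore"},
}

def rank_translations(context: str, candidates: dict[str, set[str]]) -> list[str]:
    """Rank candidate translations by how many of their associated words
    occur in the sentence context (highest overlap first)."""
    ctx = set(context.lower().split())
    return sorted(candidates, key=lambda c: len(candidates[c] & ctx), reverse=True)

sentence = "we sat on the bank of the river and watched the water"
print(rank_translations(sentence, COOCCURRENCE))
```

In a CALL reading scenario the learner would then see the dictionary's translations reordered so the contextually fitting one comes first.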

Finding a Tradeoff between Accuracy and Rater's Workload in Grading Clustered Short Answers

no code implementations LREC 2014 Andrea Horbach, Alexis Palmer, Magdalena Wolska

In this paper we investigate the potential of answer clustering for semi-automatic scoring of short answer questions for German as a foreign language.
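The workload/accuracy tradeoff comes from grading one representative per cluster and propagating its label to the rest: tight clusters save little work but rarely mislabel, loose clusters save more work at higher risk. A minimal sketch (not the paper's pipeline) using greedy clustering by word-overlap similarity, with an arbitrary placeholder threshold:

```python
def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity between two short answers."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def greedy_cluster(answers: list[str], threshold: float = 0.6) -> list[list[str]]:
    """Attach each answer to the first cluster whose representative (its first
    member) is similar enough; otherwise open a new cluster. The 0.6 threshold
    is a placeholder, not a value from the paper."""
    clusters: list[list[str]] = []
    for ans in answers:
        for cluster in clusters:
            if jaccard(ans, cluster[0]) >= threshold:
                cluster.append(ans)
                break
        else:
            clusters.append([ans])
    return clusters

answers = ["the plant needs light", "plant needs light", "water evaporates"]
print(greedy_cluster(answers))  # the rater now grades 2 representatives, not 3 answers
```

Lowering `threshold` shrinks the rater's workload (fewer clusters to grade) while raising the chance that a wrong answer is absorbed into a correct cluster.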

