1 code implementation • COLING (ArgMining) 2020 • Lily Ng, Anne Lauscher, Joel Tetreault, Courtney Napoles
Computational models of argument quality (AQ) have focused primarily on assessing the overall quality or just one specific characteristic of an argument, such as its convincingness or its clarity.
1 code implementation • COLING 2020 • Anne Lauscher, Lily Ng, Courtney Napoles, Joel Tetreault
Though preceding work in computational argument quality (AQ) mostly focuses on assessing overall AQ, researchers agree that writers would benefit from feedback targeting individual dimensions of argumentation theory.
no code implementations • TACL 2019 • Courtney Napoles, Maria Nădejde, Joel Tetreault
Until now, grammatical error correction (GEC) has been primarily evaluated on text written by non-native English speakers, with a focus on student essays.
no code implementations • WS 2018 • Junchao Zheng, Courtney Napoles, Joel Tetreault, Kostiantyn Omelianchuk
Run-on sentences are common grammatical mistakes, but little research has tackled this problem to date.
1 code implementation • WS 2017 • Courtney Napoles, Chris Callison-Burch
Our model rivals the current state of the art using a fraction of the training data.
no code implementations • WS 2017 • Keisuke Sakaguchi, Courtney Napoles, Joel Tetreault
The field of grammatical error correction (GEC) has made tremendous strides in the last ten years, but new questions and obstacles are revealing themselves.
1 code implementation • WS 2017 • Courtney Napoles, Joel Tetreault, Aasish Pappu, Enrica Rosato, Brian Provenzale
This work presents a dataset and annotation scheme for the new task of identifying "good" conversations that occur online, which we call ERICs: Engaging, Respectful, and/or Informative Conversations.
1 code implementation • EACL 2017 • Courtney Napoles, Keisuke Sakaguchi, Joel Tetreault
We present a new parallel corpus, the JHU FLuency-Extended GUG corpus (JFLEG), for developing and evaluating grammatical error correction (GEC).
1 code implementation • EMNLP 2016 • Courtney Napoles, Keisuke Sakaguchi, Joel Tetreault
We show that reference-less grammaticality metrics correlate very strongly with human judgments and are competitive with the leading reference-based evaluation metrics.
1 code implementation • 9 May 2016 • Courtney Napoles, Keisuke Sakaguchi, Matt Post, Joel Tetreault
The GLEU metric was proposed for evaluating grammatical error corrections using n-gram overlap with a set of reference sentences, as opposed to precision/recall of specific annotated errors (Napoles et al., 2015).
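The core idea of overlap-based evaluation can be illustrated with a short sketch. This is a simplified clipped n-gram precision against a single reference, not the exact GLEU formula from Napoles et al. (2015), which additionally penalizes n-grams carried over from the uncorrected source; the function names here are illustrative only.

```python
from collections import Counter

def ngrams(tokens, n):
    """Return a Counter of all n-grams (as tuples) in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def ngram_overlap_precision(hypothesis, reference, max_n=4):
    """Average clipped n-gram precision of a hypothesis against one reference.

    Simplified illustration of overlap-based scoring; the actual GLEU metric
    also accounts for the source sentence and multiple references.
    """
    hyp, ref = hypothesis.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_counts = ngrams(hyp, n)
        ref_counts = ngrams(ref, n)
        total = sum(hyp_counts.values())
        if total == 0:
            continue  # hypothesis too short for this n-gram order
        # Clip each n-gram's count by its count in the reference
        matched = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        precisions.append(matched / total)
    return sum(precisions) / len(precisions) if precisions else 0.0
```

A hypothesis identical to the reference scores 1.0, and the score degrades as corrections diverge from the reference, without requiring error-span annotations.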
1 code implementation • TACL 2016 • Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, Chris Callison-Burch
Most recent sentence simplification systems use basic machine translation models to learn lexical and syntactic paraphrases from a manually simplified parallel corpus.
Ranked #7 on Text Simplification on TurkCorpus
1 code implementation • TACL 2016 • Keisuke Sakaguchi, Courtney Napoles, Matt Post, Joel Tetreault
The field of grammatical error correction (GEC) has grown substantially in recent years, with research directed at both evaluation metrics and improved system performance against those metrics.
no code implementations • TACL 2015 • Wei Xu, Chris Callison-Burch, Courtney Napoles
Simple Wikipedia has dominated simplification research in the past 5 years.