Search Results for author: Courtney Napoles

Found 19 papers, 9 papers with code

Creating a Domain-diverse Corpus for Theory-based Argument Quality Assessment

1 code implementation COLING (ArgMining) 2020 Lily Ng, Anne Lauscher, Joel Tetreault, Courtney Napoles

Computational models of argument quality (AQ) have focused primarily on assessing the overall quality or just one specific characteristic of an argument, such as its convincingness or its clarity.

Rhetoric, Logic, and Dialectic: Advancing Theory-based Argument Quality Assessment in Natural Language Processing

1 code implementation COLING 2020 Anne Lauscher, Lily Ng, Courtney Napoles, Joel Tetreault

Though preceding work in computational argument quality (AQ) mostly focuses on assessing overall AQ, researchers agree that writers would benefit from feedback targeting individual dimensions of argumentation theory.

Enabling Robust Grammatical Error Correction in New Domains: Data Sets, Metrics, and Analyses

no code implementations TACL 2019 Courtney Napoles, Maria Nădejde, Joel Tetreault

Until now, grammatical error correction (GEC) has been primarily evaluated on text written by non-native English speakers, with a focus on student essays.

Grammatical Error Correction

GEC into the future: Where are we going and how do we get there?

no code implementations WS 2017 Keisuke Sakaguchi, Courtney Napoles, Joel Tetreault

The field of grammatical error correction (GEC) has made tremendous strides in the last ten years, but new questions and obstacles are revealing themselves.

Grammatical Error Correction Machine Translation

Finding Good Conversations Online: The Yahoo News Annotated Comments Corpus

1 code implementation WS 2017 Courtney Napoles, Joel Tetreault, Aasish Pappu, Enrica Rosato, Brian Provenzale

This work presents a dataset and annotation scheme for the new task of identifying "good" conversations that occur online, which we call ERICs: Engaging, Respectful, and/or Informative Conversations.

JFLEG: A Fluency Corpus and Benchmark for Grammatical Error Correction

1 code implementation EACL 2017 Courtney Napoles, Keisuke Sakaguchi, Joel Tetreault

We present a new parallel corpus, the JHU FLuency-Extended GUG corpus (JFLEG), for developing and evaluating grammatical error correction (GEC).

Grammatical Error Correction

There's No Comparison: Reference-less Evaluation Metrics in Grammatical Error Correction

1 code implementation EMNLP 2016 Courtney Napoles, Keisuke Sakaguchi, Joel Tetreault

We show that reference-less grammaticality metrics correlate very strongly with human judgments and are competitive with the leading reference-based evaluation metrics.

Benchmarking Grammatical Error Correction

GLEU Without Tuning

1 code implementation 9 May 2016 Courtney Napoles, Keisuke Sakaguchi, Matt Post, Joel Tetreault

The GLEU metric was proposed for evaluating grammatical error corrections using n-gram overlap with a set of reference sentences, as opposed to precision/recall of specific annotated errors (Napoles et al., 2015).
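The n-gram-overlap idea behind GLEU can be illustrated with a minimal sketch. Note this is a simplified overlap score, not the full GLEU formula (which also penalizes n-grams that appear in the source but not the reference); the function and parameter names here are illustrative assumptions, not from the paper's implementation.

```python
from collections import Counter

def ngrams(tokens, n):
    # Multiset of the n-grams in a token sequence.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def overlap_score(hypothesis, reference, max_n=4):
    # Simplified sketch: fraction of hypothesis n-grams (n = 1..max_n) that
    # also appear in the reference, averaged over n. This captures only the
    # "n-gram overlap with reference sentences" idea, as opposed to
    # precision/recall over specific annotated errors; it is NOT full GLEU.
    hyp = hypothesis.split()
    ref = reference.split()
    scores = []
    for n in range(1, max_n + 1):
        h = ngrams(hyp, n)
        r = ngrams(ref, n)
        total = sum(h.values())
        if total == 0:
            continue  # hypothesis too short for this n
        matched = sum(min(count, r[gram]) for gram, count in h.items())
        scores.append(matched / total)
    return sum(scores) / len(scores) if scores else 0.0
```

An identical correction scores 1.0, and a correction sharing no n-grams with the reference scores 0.0; real GLEU additionally handles multiple references and source-copied errors.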

Optimizing Statistical Machine Translation for Text Simplification

1 code implementation TACL 2016 Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, Chris Callison-Burch

Most recent sentence simplification systems use basic machine translation models to learn lexical and syntactic paraphrases from a manually simplified parallel corpus.

Machine Translation Text Simplification

Reassessing the Goals of Grammatical Error Correction: Fluency Instead of Grammaticality

1 code implementation TACL 2016 Keisuke Sakaguchi, Courtney Napoles, Matt Post, Joel Tetreault

The field of grammatical error correction (GEC) has grown substantially in recent years, with research directed at both evaluation metrics and improved system performance against those metrics.

Grammatical Error Correction
