Search Results for author: Erin Bransom

Found 7 papers, 4 papers with code

Personalized Jargon Identification for Enhanced Interdisciplinary Communication

no code implementations • 16 Nov 2023 • Yue Guo, Joseph Chee Chang, Maria Antoniak, Erin Bransom, Trevor Cohen, Lucy Lu Wang, Tal August

We collect a dataset of over 10K term familiarity annotations from 11 computer science researchers for terms drawn from 100 paper abstracts.

CARE: Extracting Experimental Findings From Clinical Literature

no code implementations • 16 Nov 2023 • Aakanksha Naik, Bailey Kuehl, Erin Bransom, Doug Downey, Tom Hope

Focusing on biomedicine, this work presents CARE -- a new information extraction (IE) dataset for the task of extracting clinical findings.

Relation Extraction

ARIES: A Corpus of Scientific Paper Edits Made in Response to Peer Reviews

1 code implementation • 21 Jun 2023 • Mike D'Arcy, Alexis Ross, Erin Bransom, Bailey Kuehl, Jonathan Bragg, Tom Hope, Doug Downey

Revising scientific papers based on peer feedback is a challenging task that requires not only deep scientific knowledge and reasoning, but also the ability to recognize the implicit requests in high-level feedback and to choose the best of many possible ways to update the manuscript in response.

Automated Metrics for Medical Multi-Document Summarization Disagree with Human Evaluations

1 code implementation • 23 May 2023 • Lucy Lu Wang, Yulia Otmakhova, Jay DeYoung, Thinh Hung Truong, Bailey E. Kuehl, Erin Bransom, Byron C. Wallace

We analyze how automated summarization evaluation metrics correlate with lexical features of generated summaries, with other automated metrics (including several we propose in this work), and with aspects of human-assessed summary quality.

Document Summarization • Multi-Document Summarization

S2abEL: A Dataset for Entity Linking from Scientific Tables

1 code implementation • 30 Apr 2023 • Yuze Lou, Bailey Kuehl, Erin Bransom, Sergey Feldman, Aakanksha Naik, Doug Downey

Entity linking (EL) is the task of linking a textual mention to its corresponding entry in a knowledge base, and is critical for many knowledge-intensive NLP applications.

Entity Linking • Question Answering

LongEval: Guidelines for Human Evaluation of Faithfulness in Long-form Summarization

1 code implementation • 30 Jan 2023 • Kalpesh Krishna, Erin Bransom, Bailey Kuehl, Mohit Iyyer, Pradeep Dasigi, Arman Cohan, Kyle Lo

Motivated by our survey, we present LongEval, a set of guidelines for human evaluation of faithfulness in long-form summaries that addresses the following challenges: (1) How can we achieve high inter-annotator agreement on faithfulness scores?
