Search Results for author: Lucy Lu Wang

Found 23 papers, 18 papers with code

Automated Metrics for Medical Multi-Document Summarization Disagree with Human Evaluations

1 code implementation · 23 May 2023 · Lucy Lu Wang, Yulia Otmakhova, Jay DeYoung, Thinh Hung Truong, Bailey E. Kuehl, Erin Bransom, Byron C. Wallace

We analyze how automated summarization evaluation metrics correlate with lexical features of generated summaries, with other automated metrics (including several we propose in this work), and with aspects of human-assessed summary quality.

Document Summarization · Multi-Document Summarization

APPLS: A Meta-evaluation Testbed for Plain Language Summarization

1 code implementation · 23 May 2023 · Yue Guo, Tal August, Gondy Leroy, Trevor Cohen, Lucy Lu Wang

Our research contributes the first meta-evaluation testbed for PLS and a comprehensive evaluation of existing metrics, offering insights with relevance to other text generation tasks.

Informativeness · Text Generation +1

SciFact-Open: Towards open-domain scientific claim verification

1 code implementation · 25 Oct 2022 · David Wadden, Kyle Lo, Bailey Kuehl, Arman Cohan, Iz Beltagy, Lucy Lu Wang, Hannaneh Hajishirzi

While research on scientific claim verification has led to the development of powerful systems that appear to approach human performance, these approaches have yet to be tested in a realistic setting against large corpora of scientific literature.

Claim Verification · Information Retrieval +1

Generating Scientific Claims for Zero-Shot Scientific Fact Checking

1 code implementation · ACL 2022 · Dustin Wright, David Wadden, Kyle Lo, Bailey Kuehl, Arman Cohan, Isabelle Augenstein, Lucy Lu Wang

To address this challenge, we propose scientific claim generation, the task of generating one or more atomic and verifiable claims from scientific sentences, and demonstrate its usefulness in zero-shot fact checking for biomedical claims.

Fact Checking

Paper Plain: Making Medical Research Papers Approachable to Healthcare Consumers with Natural Language Processing

1 code implementation · 28 Feb 2022 · Tal August, Lucy Lu Wang, Jonathan Bragg, Marti A. Hearst, Andrew Head, Kyle Lo

When seeking information not covered in patient-friendly documents, like medical pamphlets, healthcare consumers may turn to the research literature.

MultiVerS: Improving scientific claim verification with weak supervision and full-document context

2 code implementations · Findings (NAACL) 2022 · David Wadden, Kyle Lo, Lucy Lu Wang, Arman Cohan, Iz Beltagy, Hannaneh Hajishirzi

Our approach outperforms two competitive baselines on three scientific claim verification datasets, with particularly strong performance in zero / few-shot domain adaptation experiments.

Claim Verification · Domain Adaptation +1

Literature-Augmented Clinical Outcome Prediction

1 code implementation · Findings (NAACL) 2022 · Aakanksha Naik, Sravanthi Parasa, Sergey Feldman, Lucy Lu Wang, Tom Hope

We present BEEP (Biomedical Evidence-Enhanced Predictions), a novel approach for clinical outcome prediction that retrieves patient-specific medical literature and incorporates it into predictive models.

Decision Making

VILA: Improving Structured Content Extraction from Scientific PDFs Using Visual Layout Groups

1 code implementation · 1 Jun 2021 · Zejiang Shen, Kyle Lo, Lucy Lu Wang, Bailey Kuehl, Daniel S. Weld, Doug Downey

Experiments are conducted on a newly curated evaluation suite, S2-VLUE, that unifies existing automatically-labeled datasets and includes a new dataset of manual annotations covering diverse papers from 19 scientific disciplines.

Language Modelling · Text Classification +2

Searching for Scientific Evidence in a Pandemic: An Overview of TREC-COVID

no code implementations · 19 Apr 2021 · Kirk Roberts, Tasmeer Alam, Steven Bedrick, Dina Demner-Fushman, Kyle Lo, Ian Soboroff, Ellen Voorhees, Lucy Lu Wang, William R Hersh

We present an overview of the TREC-COVID Challenge, an information retrieval (IR) shared task to evaluate search on scientific literature related to COVID-19.

Information Retrieval · Retrieval

MS2: Multi-Document Summarization of Medical Studies

2 code implementations · 13 Apr 2021 · Jay DeYoung, Iz Beltagy, Madeleine van Zuylen, Bailey Kuehl, Lucy Lu Wang

In support of this goal, we release MS^2 (Multi-Document Summarization of Medical Studies), a dataset of over 470k documents and 20k summaries derived from the scientific literature.

Document Summarization · Multi-Document Summarization

TREC-COVID: Constructing a Pandemic Information Retrieval Test Collection

no code implementations · 9 May 2020 · Ellen Voorhees, Tasmeer Alam, Steven Bedrick, Dina Demner-Fushman, William R Hersh, Kyle Lo, Kirk Roberts, Ian Soboroff, Lucy Lu Wang

TREC-COVID is a community evaluation designed to build a test collection that captures the information needs of biomedical researchers using the scientific literature during a pandemic.

Information Retrieval · Retrieval

Fact or Fiction: Verifying Scientific Claims

2 code implementations · EMNLP 2020 · David Wadden, Shanchuan Lin, Kyle Lo, Lucy Lu Wang, Madeleine van Zuylen, Arman Cohan, Hannaneh Hajishirzi

We introduce scientific claim verification, a new task to select abstracts from the research literature containing evidence that SUPPORTS or REFUTES a given scientific claim, and to identify rationales justifying each decision.

Claim Verification · Domain Adaptation +1

S2ORC: The Semantic Scholar Open Research Corpus

2 code implementations · ACL 2020 · Kyle Lo, Lucy Lu Wang, Mark Neumann, Rodney Kinney, Dan S. Weld

We introduce S2ORC, a large corpus of 81.1M English-language academic papers spanning many academic disciplines.

Language Modelling

SUPP.AI: Finding Evidence for Supplement-Drug Interactions

1 code implementation · ACL 2020 · Lucy Lu Wang, Oyvind Tafjord, Arman Cohan, Sarthak Jain, Sam Skjonsberg, Carissa Schoenick, Nick Botner, Waleed Ammar

We fine-tune the contextualized word representations of the RoBERTa language model using labeled DDI data, and apply the fine-tuned model to identify supplement interactions.

General Classification · Language Modelling