Search Results for author: Eric Lehman

Found 9 papers, 5 papers with code

From Sparse to Dense: GPT-4 Summarization with Chain of Density Prompting

no code implementations • 8 Sep 2023 • Griffin Adams, Alexander Fabbri, Faisal Ladhak, Eric Lehman, Noémie Elhadad

We conduct a human preference study on 100 CNN DailyMail articles and find that humans prefer GPT-4 summaries that are denser than those generated by a vanilla prompt and almost as dense as human-written summaries.

Informativeness
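
The Chain of Density idea in the entry above can be sketched as an iterative prompting loop: each pass asks GPT-4 to fold a few more salient entities into the summary without letting it grow. A minimal sketch, assuming a hypothetical `complete(prompt)` helper that wraps a GPT-4 chat call; the prompt wording is illustrative, not the paper's exact template:

```python
# Sketch of Chain of Density (CoD) prompting. `complete(prompt)` is an
# assumed helper that returns a GPT-4 completion as a string; the prompt
# wording below is illustrative, not the paper's exact template.

COD_STEP = (
    "Article:\n{article}\n\n"
    "Current summary:\n{summary}\n\n"
    "Identify 1-3 informative entities from the article that are missing "
    "from the summary, then rewrite the summary to include them without "
    "increasing its length."
)

def chain_of_density(article: str, complete, steps: int = 5) -> list[str]:
    """Return a list of increasingly dense summaries of `article`."""
    summary = complete(f"Write a short, sparse summary of:\n{article}")
    summaries = [summary]
    for _ in range(steps - 1):
        summary = complete(COD_STEP.format(article=article, summary=summary))
        summaries.append(summary)
    return summaries
```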

Do We Still Need Clinical Language Models?

no code implementations • 16 Feb 2023 • Eric Lehman, Evan Hernandez, Diwakar Mahajan, Jonas Wulff, Micah J. Smith, Zachary Ziegler, Daniel Nadler, Peter Szolovits, Alistair Johnson, Emily Alsentzer

To investigate this question, we conduct an extensive empirical analysis of 12 language models, ranging from 220M to 175B parameters, measuring their performance on 3 different clinical tasks that test their ability to parse and reason over electronic health records.

In-Context Learning

Does BERT Pretrained on Clinical Notes Reveal Sensitive Data?

4 code implementations • NAACL 2021 • Eric Lehman, Sarthak Jain, Karl Pichotta, Yoav Goldberg, Byron C. Wallace

The cost of training such models (and the necessity of data access to do so) coupled with their utility motivates parameter sharing, i.e., the release of pretrained models such as ClinicalBERT.
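
The question in this paper's title is typically probed by querying the masked LM directly for protected details. A minimal sketch using Hugging Face's fill-mask pipeline; the model name and probe sentence are placeholders rather than the paper's actual setup, which targets patient names and conditions in clinical notes:

```python
# Sketch of probing a masked LM for memorized details. The model name
# and probe sentence are illustrative placeholders, not the paper's
# actual probes over clinical notes.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

# Ask the model to fill in a patient surname given a clinical context.
probe = "The patient, Mr. [MASK], was diagnosed with pneumonia."
for candidate in unmasker(probe, top_k=5):
    print(f"{candidate['token_str']!r}  score={candidate['score']:.3f}")
```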

Understanding Clinical Trial Reports: Extracting Medical Entities and Their Relations

no code implementations • 7 Oct 2020 • Benjamin E. Nye, Jay DeYoung, Eric Lehman, Ani Nenkova, Iain J. Marshall, Byron C. Wallace

Here we consider the end-to-end task of both (a) extracting treatments and outcomes from full-text articles describing clinical trials (entity identification) and (b) inferring the reported results for the former with respect to the latter (relation extraction).

Decision Making · Relation Extraction
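
The two-stage structure described in the entry above maps naturally onto a simple pipeline interface. A minimal sketch under assumed names; the toy keyword matcher and fixed label stand in for trained entity and relation models, and the three-way label scheme is in the spirit of evidence inference:

```python
# Sketch of the end-to-end structure: stage (a) identifies treatment and
# outcome spans, stage (b) classifies the reported result for each
# (treatment, outcome) pair. All names and the toy stage bodies are
# illustrative assumptions, not the authors' models.
from dataclasses import dataclass
from itertools import product

LABELS = ("significantly increased", "significantly decreased",
          "no significant difference")

@dataclass
class Entity:
    text: str
    kind: str  # "treatment" or "outcome"

def extract_entities(article: str) -> list[Entity]:
    # Stage (a): in practice a sequence tagger; a toy keyword match here.
    vocab = {"aspirin": "treatment", "mortality": "outcome"}
    return [Entity(w, k) for w, k in vocab.items() if w in article.lower()]

def classify_relation(article: str, treatment: Entity, outcome: Entity) -> str:
    # Stage (b): in practice a trained classifier; a fixed placeholder here.
    return LABELS[2]

def evidence_pipeline(article: str) -> list[tuple[Entity, Entity, str]]:
    entities = extract_entities(article)
    treatments = [e for e in entities if e.kind == "treatment"]
    outcomes = [e for e in entities if e.kind == "outcome"]
    return [(t, o, classify_relation(article, t, o))
            for t, o in product(treatments, outcomes)]
```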

Evidence Inference 2.0: More Data, Better Models

1 code implementation • WS 2020 • Jay DeYoung, Eric Lehman, Ben Nye, Iain J. Marshall, Byron C. Wallace

Ideally, we could consult a database of evidence gleaned from clinical trials to answer such questions.

ERASER: A Benchmark to Evaluate Rationalized NLP Models

2 code implementations • ACL 2020 • Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, Byron C. Wallace

We propose several metrics that aim to capture how well the rationales provided by models align with human rationales, and how faithful these rationales are (i.e., the degree to which the provided rationales influenced the corresponding predictions).
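
ERASER operationalizes faithfulness with two erasure-based scores: comprehensiveness (how much the prediction degrades when rationale tokens are removed) and sufficiency (how well the rationale alone supports the prediction). A minimal sketch, assuming a hypothetical `predict_prob(tokens, label)` helper that returns a trained classifier's probability for a label:

```python
# Sketch of ERASER's erasure-based faithfulness scores. `predict_prob`
# is an assumed helper returning a trained classifier's probability of
# `label` for a token sequence; `rationale` is a set of token indices.

def comprehensiveness(tokens, rationale, label, predict_prob):
    # Drop in confidence once the rationale tokens are erased: high
    # values mean the rationale was actually needed for the prediction.
    remainder = [t for i, t in enumerate(tokens) if i not in rationale]
    return predict_prob(tokens, label) - predict_prob(remainder, label)

def sufficiency(tokens, rationale, label, predict_prob):
    # Drop in confidence when keeping only the rationale tokens: low
    # values mean the rationale alone supports the prediction.
    kept = [t for i, t in enumerate(tokens) if i in rationale]
    return predict_prob(tokens, label) - predict_prob(kept, label)
```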

Inferring Which Medical Treatments Work from Reports of Clinical Trials

2 code implementations • NAACL 2019 • Eric Lehman, Jay DeYoung, Regina Barzilay, Byron C. Wallace

In this paper, we present a new task and corpus for making this unstructured evidence actionable.
