Search Results for author: Pranav Rajpurkar

Found 48 papers, 26 papers with code

Style-Aware Radiology Report Generation with RadGraph and Few-Shot Prompting

no code implementations26 Oct 2023 Benjamin Yan, Ruochen Liu, David E. Kuo, Subathra Adithan, Eduardo Pontes Reis, Stephen Kwak, Vasantha Kumar Venugopal, Chloe P. O'Connell, Agustina Saenz, Pranav Rajpurkar, Michael Moor

First, we extract the content from an image; then, we verbalize the extracted content into a report that matches the style of a specific radiologist.

Augmenting medical image classifiers with synthetic data from latent diffusion models

no code implementations23 Aug 2023 Luke W. Sagers, James A. Diao, Luke Melas-Kyriazi, Matthew Groh, Pranav Rajpurkar, Adewole S. Adamson, Veronica Rotemberg, Roxana Daneshjou, Arjun K. Manrai

While hundreds of artificial intelligence (AI) algorithms are now approved or cleared by the US Food and Drug Administration (FDA), many studies have shown inconsistent generalization or latent bias, particularly for underrepresented populations.

Image Generation

RadGraph2: Modeling Disease Progression in Radiology Reports via Hierarchical Information Extraction

no code implementations9 Aug 2023 Sameer Khanna, Adam Dejl, Kibo Yoon, Quoc Hung Truong, Hanh Duong, Agustina Saenz, Pranav Rajpurkar

We present RadGraph2, a novel dataset for extracting information from radiology reports that focuses on capturing changes in disease state and device placement over time.

Relation Extraction

Med-Flamingo: a Multimodal Medical Few-shot Learner

1 code implementation27 Jul 2023 Michael Moor, Qian Huang, Shirley Wu, Michihiro Yasunaga, Cyril Zakka, Yash Dalmia, Eduardo Pontes Reis, Pranav Rajpurkar, Jure Leskovec

However, existing models typically must be fine-tuned on sizeable downstream datasets, which poses a significant limitation: in many medical applications data is scarce, necessitating models that can learn from few examples in real time.

Medical Visual Question Answering Question Answering +1

Improving Zero-Shot Detection of Low Prevalence Chest Pathologies using Domain Pre-trained Language Models

1 code implementation13 Jun 2023 Aakash Mishra, Rajat Mittal, Christy Jestin, Kostas Tingos, Pranav Rajpurkar

We hypothesize that domain pre-trained models such as CXR-BERT, BlueBERT, and ClinicalBERT offer the potential to improve the performance of CLIP-like models with specific domain knowledge by replacing BERT weights at the cost of breaking the original model's alignment.

Zero-Shot Learning
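
Zero-shot detection in CLIP-like models reduces to comparing an image embedding against one text-prompt embedding per pathology in a shared space. The sketch below is a minimal, illustrative version of that scoring step with random vectors standing in for the outputs of the image encoder and a domain-pretrained text encoder such as CXR-BERT; the prompt names are hypothetical.

```python
import numpy as np

def zero_shot_scores(image_emb, prompt_embs):
    """Score an image against one text prompt per pathology.

    CLIP-style zero-shot classification: both encoders map into a shared
    space, and the cosine similarity between the image embedding and each
    prompt embedding serves as that class's score.
    """
    img = image_emb / np.linalg.norm(image_emb)
    txt = prompt_embs / np.linalg.norm(prompt_embs, axis=1, keepdims=True)
    return txt @ img  # one cosine similarity per prompt

# Toy stand-ins for encoder outputs.
rng = np.random.default_rng(0)
image_emb = rng.normal(size=128)
prompt_embs = rng.normal(size=(3, 128))  # e.g. "no finding", "edema", "effusion"
scores = zero_shot_scores(image_emb, prompt_embs)
predicted = int(np.argmax(scores))
```

Swapping in domain-pretrained BERT weights changes how `prompt_embs` is produced, which is why the paper notes the cost of breaking the original image-text alignment.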

BenchMD: A Benchmark for Unified Learning on Medical Images and Sensors

1 code implementation17 Apr 2023 Kathryn Wantlin, Chenwei Wu, Shih-Cheng Huang, Oishi Banerjee, Farah Dadabhoy, Veeral Vipin Mehta, Ryan Wonhee Han, Fang Cao, Raja R. Narayan, Errol Colak, Adewole Adamson, Laura Heacock, Geoffrey H. Tison, Alex Tamkin, Pranav Rajpurkar

Finally, we evaluate performance on out-of-distribution data collected at different hospitals than the training data, representing naturally occurring distribution shifts that frequently degrade the performance of medical AI models.

Self-Supervised Learning

Video Pretraining Advances 3D Deep Learning on Chest CT Tasks

1 code implementation2 Apr 2023 Alexander Ke, Shih-Cheng Huang, Chloe P O'Connell, Michal Klimont, Serena Yeung, Pranav Rajpurkar

We demonstrate that video pretraining improves the average performance of seven 3D models on two chest CT datasets, regardless of finetuning dataset size, and that video pretraining allows 3D models to outperform 2D baselines.

Image Classification

Multimodal Image-Text Matching Improves Retrieval-based Chest X-Ray Report Generation

1 code implementation29 Mar 2023 Jaehwan Jeong, Katherine Tian, Andrew Li, Sina Hartung, Fardad Behzadi, Juan Calle, David Osayande, Michael Pohlen, Subathra Adithan, Pranav Rajpurkar

In this work, we propose Contrastive X-Ray REport Match (X-REM), a novel retrieval-based radiology report generation module that uses an image-text matching score to measure the similarity of a chest X-ray image and radiology report for report retrieval.

Image Captioning Image-text matching +2
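
At inference time, a retrieval-based report generator like X-REM scores each candidate report in a corpus against the input image and returns the best match. The sketch below is a minimal illustration of that retrieval step, with cosine similarity standing in for the learned image-text matching score and random vectors standing in for real embeddings.

```python
import numpy as np

def retrieve_report(image_emb, report_embs, reports):
    """Return the corpus report whose embedding best matches the image.

    Cosine similarity here is a stand-in for the learned image-text
    matching score used to rank candidate reports.
    """
    img = image_emb / np.linalg.norm(image_emb)
    reps = report_embs / np.linalg.norm(report_embs, axis=1, keepdims=True)
    scores = reps @ img
    best = int(np.argmax(scores))
    return reports[best], scores

rng = np.random.default_rng(1)
image_emb = rng.normal(size=64)
report_embs = rng.normal(size=(4, 64))   # one embedding per candidate report
reports = ["report A", "report B", "report C", "report D"]
best_report, scores = retrieve_report(image_emb, report_embs, reports)
```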

Improving dermatology classifiers across populations using images generated by large diffusion models

no code implementations23 Nov 2022 Luke W. Sagers, James A. Diao, Matthew Groh, Pranav Rajpurkar, Adewole S. Adamson, Arjun K. Manrai

Dermatological classification algorithms developed without sufficiently diverse training data may generalize poorly across populations.

Improving Radiology Report Generation Systems by Removing Hallucinated References to Non-existent Priors

1 code implementation27 Sep 2022 Vignav Ramesh, Nathan Andrew Chi, Pranav Rajpurkar

Current deep learning models trained to generate radiology reports from chest radiographs are capable of producing clinically accurate, clear, and actionable text that can advance patient care.

Token Classification

Deep Learning-Based Sparse Whole-Slide Image Analysis for the Diagnosis of Gastric Intestinal Metaplasia

1 code implementation5 Jan 2022 Jon Braatz, Pranav Rajpurkar, Stephanie Zhang, Andrew Y. Ng, Jeanne Shen

We develop an evaluation framework inspired by the early classification literature, in order to quantify the tradeoff between diagnostic performance and inference time for sparse analytic approaches.

Q-Pain: A Question Answering Dataset to Measure Social Bias in Pain Management

no code implementations3 Aug 2021 Cécile Logé, Emily Ross, David Yaw Amoah Dadey, Saahil Jain, Adriel Saporta, Andrew Y. Ng, Pranav Rajpurkar

Recent advances in Natural Language Processing (NLP), and specifically automated Question Answering (QA) systems, have demonstrated both impressive linguistic fluency and a pernicious tendency to reflect social biases.

Decision Making Experimental Design +2

RadGraph: Extracting Clinical Entities and Relations from Radiology Reports

1 code implementation28 Jun 2021 Saahil Jain, Ashwin Agrawal, Adriel Saporta, Steven QH Truong, Du Nguyen Duong, Tan Bui, Pierre Chambon, Yuhao Zhang, Matthew P. Lungren, Andrew Y. Ng, Curtis P. Langlotz, Pranav Rajpurkar

We release a development dataset, which contains board-certified radiologist annotations for 500 radiology reports from the MIMIC-CXR dataset (14,579 entities and 10,889 relations), and a test dataset, which contains two independent sets of board-certified radiologist annotations for 100 radiology reports split equally across the MIMIC-CXR and CheXpert datasets.

Relation Extraction
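
A RadGraph-style annotation pairs entity spans (anatomy and observations) with typed relations between them. The sketch below shows one plausible in-memory representation; the field names and labels are illustrative, not the dataset's exact schema, though `located_at` is one of the relation types RadGraph defines.

```python
# Illustrative annotation: entities are text spans with a label,
# relations are (head, type, tail) triples over entity ids.
annotation = {
    "text": "Left lower lobe opacity, likely atelectasis.",
    "entities": {
        "e1": {"tokens": "Left lower lobe", "label": "Anatomy"},
        "e2": {"tokens": "opacity", "label": "Observation: Definitely Present"},
        "e3": {"tokens": "atelectasis", "label": "Observation: Uncertain"},
    },
    "relations": [("e2", "located_at", "e1")],
}

def entity_triples(ann):
    """Resolve entity ids into (head tokens, relation, tail tokens) triples."""
    ents = ann["entities"]
    return [(ents[h]["tokens"], rel, ents[t]["tokens"])
            for h, rel, t in ann["relations"]]

triples = entity_triples(annotation)
```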

Structured dataset documentation: a datasheet for CheXpert

1 code implementation7 May 2021 Christian Garbin, Pranav Rajpurkar, Jeremy Irvin, Matthew P. Lungren, Oge Marques

Following the structured format of Datasheets for Datasets, this paper expands on the original CheXpert paper and other sources to show the critical role played by radiologists in the creation of reliable labels and to describe the different aspects of the dataset composition in detail.

BIG-bench Machine Learning

3KG: Contrastive Learning of 12-Lead Electrocardiograms using Physiologically-Inspired Augmentations

no code implementations21 Apr 2021 Bryan Gopal, Ryan W. Han, Gautham Raghupathi, Andrew Y. Ng, Geoffrey H. Tison, Pranav Rajpurkar

We propose 3KG, a physiologically-inspired contrastive learning approach that generates views using 3D augmentations of the 12-lead electrocardiogram.

Contrastive Learning Time Series Analysis
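
3KG's physiologically inspired views come from treating the ECG as a 3D vectorcardiogram and applying spatial transformations to it. The sketch below shows only the random-rotation step on a toy 3-channel signal (the full method also maps between the 12-lead ECG and the 3D representation); the angle bound is an assumption for illustration.

```python
import numpy as np

def random_rotation_3d(rng, max_angle=np.pi / 8):
    """Compose small random rotations about the x, y and z axes."""
    ax, ay, az = rng.uniform(-max_angle, max_angle, size=3)
    rx = np.array([[1, 0, 0],
                   [0, np.cos(ax), -np.sin(ax)],
                   [0, np.sin(ax), np.cos(ax)]])
    ry = np.array([[np.cos(ay), 0, np.sin(ay)],
                   [0, 1, 0],
                   [-np.sin(ay), 0, np.cos(ay)]])
    rz = np.array([[np.cos(az), -np.sin(az), 0],
                   [np.sin(az), np.cos(az), 0],
                   [0, 0, 1]])
    return rz @ ry @ rx

rng = np.random.default_rng(0)
vcg = rng.normal(size=(3, 500))            # 3 spatial leads x 500 samples
augmented = random_rotation_3d(rng) @ vcg  # a rotated "view" for contrastive learning
```

Because the transform is a pure rotation, the per-sample magnitude of the 3D loop is preserved, which is what makes the augmentation physiologically plausible rather than distortive.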

Effect of Radiology Report Labeler Quality on Deep Learning Models for Chest X-Ray Interpretation

no code implementations1 Apr 2021 Saahil Jain, Akshay Smit, Andrew Y. Ng, Pranav Rajpurkar

Next, after training image classification models using labels generated from the different radiology report labelers on one of the largest datasets of chest X-rays, we show that an image classification model trained on labels from the VisualCheXbert labeler outperforms image classification models trained on labels from the CheXpert and CheXbert labelers.

Classification General Classification +1

MedSelect: Selective Labeling for Medical Image Classification Combining Meta-Learning with Deep Reinforcement Learning

1 code implementation26 Mar 2021 Akshay Smit, Damir Vrabac, Yujie He, Andrew Y. Ng, Andrew L. Beam, Pranav Rajpurkar

We propose a selective learning method using meta-learning and deep reinforcement learning for medical image interpretation in the setting of limited labeling resources.

General Classification Image Classification +3

CheXbreak: Misclassification Identification for Deep Learning Models Interpreting Chest X-rays

no code implementations18 Mar 2021 Emma Chen, Andy Kim, Rayan Krishnan, Jin Long, Andrew Y. Ng, Pranav Rajpurkar

A major obstacle to the integration of deep learning models for chest x-ray interpretation into clinical settings is the lack of understanding of their failure modes.

CheXseen: Unseen Disease Detection for Deep Learning Interpretation of Chest X-rays

no code implementations8 Mar 2021 Siyu Shi, Ishaan Malhi, Kevin Tran, Andrew Y. Ng, Pranav Rajpurkar

Second, we evaluate whether models trained on seen diseases can detect seen diseases when co-occurring with diseases outside the subset (unseen diseases).

VisualCheXbert: Addressing the Discrepancy Between Radiology Report Labels and Image Labels

1 code implementation23 Feb 2021 Saahil Jain, Akshay Smit, Steven QH Truong, Chanh DT Nguyen, Minh-Thanh Huynh, Mudit Jain, Victoria A. Young, Andrew Y. Ng, Matthew P. Lungren, Pranav Rajpurkar

We also find that VisualCheXbert agrees with radiologists labeling chest X-ray images more closely than do radiologists labeling the corresponding radiology reports, improving the average F1 score across several medical conditions by between 0.12 (95% CI 0.09, 0.15) and 0.21 (95% CI 0.18, 0.24).

CheXseg: Combining Expert Annotations with DNN-generated Saliency Maps for X-ray Segmentation

1 code implementation21 Feb 2021 Soham Gadgil, Mark Endo, Emily Wen, Andrew Y. Ng, Pranav Rajpurkar

Medical image segmentation models are typically supervised by expert annotations at the pixel-level, which can be expensive to acquire.

Image Segmentation Knowledge Distillation +3

MedAug: Contrastive learning leveraging patient metadata improves representations for chest X-ray interpretation

no code implementations21 Feb 2021 Yen Nhi Truong Vu, Richard Wang, Niranjan Balachandar, Can Liu, Andrew Y. Ng, Pranav Rajpurkar

Our controlled experiments show that the keys to improving downstream performance on disease classification are (1) using patient metadata to appropriately create positive pairs from different images with the same underlying pathologies, and (2) maximizing the number of different images used in query pairing.

Contrastive Learning
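
The first key finding above, using patient metadata to form positive pairs from different images of the same patient, can be sketched as a simple grouping step. The record format below is a minimal assumption for illustration; the paper additionally filters pairs on metadata such as study number and laterality.

```python
from collections import defaultdict
from itertools import combinations

def positive_pairs(records):
    """Pair up distinct images that share a patient (and hence pathology).

    records: iterable of (image_id, patient_id) tuples.
    Returns all within-patient image pairs as positives for contrastive learning.
    """
    by_patient = defaultdict(list)
    for image_id, patient_id in records:
        by_patient[patient_id].append(image_id)
    pairs = []
    for images in by_patient.values():
        pairs.extend(combinations(images, 2))  # never crosses patients
    return pairs

records = [("img1", "p1"), ("img2", "p1"),
           ("img3", "p2"), ("img4", "p2"), ("img5", "p2")]
pairs = positive_pairs(records)
```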

CheXternal: Generalization of Deep Learning Models for Chest X-ray Interpretation to Photos of Chest X-rays and External Clinical Settings

1 code implementation17 Feb 2021 Pranav Rajpurkar, Anirudh Joshi, Anuj Pareek, Andrew Y. Ng, Matthew P. Lungren

Recent advances in training deep learning models have demonstrated the potential to provide accurate chest X-ray interpretation and increase access to radiology expertise.

CheXtransfer: Performance and Parameter Efficiency of ImageNet Models for Chest X-Ray Interpretation

no code implementations18 Jan 2021 Alexander Ke, William Ellsworth, Oishi Banerjee, Andrew Y. Ng, Pranav Rajpurkar

First, we find no relationship between ImageNet performance and CheXpert performance for both models without pretraining and models with pretraining.

MoCo-Pretraining Improves Representations and Transferability of Chest X-ray Models

no code implementations1 Jan 2021 Hari Sowrirajan, Jing Bo Yang, Andrew Y. Ng, Pranav Rajpurkar

Using 0.1% of labeled training data, we find that a linear model trained on MoCo-pretrained representations outperforms one trained on representations without MoCo-pretraining by an AUC of 0.096 (95% CI 0.061, 0.130), indicating that MoCo-pretrained representations are of higher quality.

Image Classification Transfer Learning

CheXphotogenic: Generalization of Deep Learning Models for Chest X-ray Interpretation to Photos of Chest X-rays

no code implementations12 Nov 2020 Pranav Rajpurkar, Anirudh Joshi, Anuj Pareek, Jeremy Irvin, Andrew Y. Ng, Matthew Lungren

In this study, we measured the diagnostic performance for 8 different chest x-ray models when applied to photos of chest x-rays.

MoCo-CXR: MoCo Pretraining Improves Representation and Transferability of Chest X-ray Models

2 code implementations11 Oct 2020 Hari Sowrirajan, Jingbo Yang, Andrew Y. Ng, Pranav Rajpurkar

In this work, we propose MoCo-CXR, which is an adaptation of the contrastive learning method Momentum Contrast (MoCo), to produce models with better representations and initializations for the detection of pathologies in chest X-rays.

Contrastive Learning Image Classification +1
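
The core mechanic MoCo-CXR adapts from Momentum Contrast is the momentum (EMA) update of the key encoder from the query encoder, which keeps the dictionary of negatives consistent. A minimal sketch of that update rule, with small arrays standing in for encoder parameters:

```python
import numpy as np

def momentum_update(key_params, query_params, m=0.999):
    """Exponential moving average update of the key encoder's parameters.

    Each key parameter moves a small step (1 - m) toward the corresponding
    query parameter; only the query encoder receives gradients.
    """
    return [m * k + (1.0 - m) * q for k, q in zip(key_params, query_params)]

query_params = [np.ones((4, 4)), np.zeros(4)]
key_params = [np.zeros((4, 4)), np.ones(4)]
key_params = momentum_update(key_params, query_params, m=0.9)
```

With a momentum near 1 (0.999 in MoCo), the key encoder evolves slowly, so keys computed in earlier batches remain comparable to current ones.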

CheXpedition: Investigating Generalization Challenges for Translation of Chest X-Ray Algorithms to the Clinical Setting

no code implementations26 Feb 2020 Pranav Rajpurkar, Anirudh Joshi, Anuj Pareek, Phil Chen, Amirhossein Kiani, Jeremy Irvin, Andrew Y. Ng, Matthew P. Lungren

First, we find that the top 10 chest x-ray models on the CheXpert competition achieve an average AUC of 0.851 on the task of detecting TB on two public TB datasets without fine-tuning or including the TB labels in training data.


Know What You Don't Know: Unanswerable Questions for SQuAD

12 code implementations ACL 2018 Pranav Rajpurkar, Robin Jia, Percy Liang

Extractive reading comprehension systems can often locate the correct answer to a question in a context document, but they also tend to make unreliable guesses on questions for which the correct answer is not stated in the context.

Natural Language Understanding Question Answering +1

MURA: Large Dataset for Abnormality Detection in Musculoskeletal Radiographs

11 code implementations11 Dec 2017 Pranav Rajpurkar, Jeremy Irvin, Aarti Bagul, Daisy Ding, Tony Duan, Hershel Mehta, Brandon Yang, Kaylie Zhu, Dillon Laird, Robyn L. Ball, Curtis Langlotz, Katie Shpanskaya, Matthew P. Lungren, Andrew Y. Ng

To evaluate models robustly and to get an estimate of radiologist performance, we collect additional labels from six board-certified Stanford radiologists on the test set, consisting of 207 musculoskeletal studies.

Anomaly Detection Specificity +1

Malaria Likelihood Prediction By Effectively Surveying Households Using Deep Reinforcement Learning

no code implementations25 Nov 2017 Pranav Rajpurkar, Vinaya Polamreddi, Anusha Balakrishnan

We build a deep reinforcement learning (RL) agent that can predict the likelihood of an individual testing positive for malaria by asking questions about their household.

Holdout Set Reinforcement Learning +1

Cardiologist-Level Arrhythmia Detection with Convolutional Neural Networks

7 code implementations6 Jul 2017 Pranav Rajpurkar, Awni Y. Hannun, Masoumeh Haghpanahi, Codie Bourn, Andrew Y. Ng

We develop an algorithm which exceeds the performance of board certified cardiologists in detecting a wide range of heart arrhythmias from electrocardiograms recorded with a single-lead wearable monitor.

Arrhythmia Detection

SQuAD: 100,000+ Questions for Machine Comprehension of Text

19 code implementations EMNLP 2016 Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang

We present the Stanford Question Answering Dataset (SQuAD), a new reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage.

Question Answering Reading Comprehension +1
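
Because every SQuAD answer is a span of the passage, extractive models typically score each token as a potential span start and span end, then pick the highest-scoring valid pair. A minimal sketch of that decoding step, with hand-written logits standing in for model outputs:

```python
import numpy as np

def best_span(start_logits, end_logits, max_len=15):
    """Pick the (start, end) pair maximizing start + end logit, end >= start."""
    best, best_score = (0, 0), -np.inf
    for s in range(len(start_logits)):
        for e in range(s, min(s + max_len, len(end_logits))):
            score = start_logits[s] + end_logits[e]
            if score > best_score:
                best, best_score = (s, e), score
    return best

tokens = "the answer is forty two today".split()
start_logits = np.array([0.1, 0.0, 0.2, 3.0, 0.5, 0.1])
end_logits   = np.array([0.0, 0.1, 0.2, 0.4, 2.5, 0.2])
s, e = best_span(start_logits, end_logits)
answer = " ".join(tokens[s:e + 1])
```

SQuAD 2.0's unanswerable questions extend this by also scoring a "no answer" option (e.g. the logit of a null span) against the best extracted span.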

Augur: Mining Human Behaviors from Fiction to Power Interactive Systems

no code implementations22 Feb 2016 Ethan Fast, William McGrath, Pranav Rajpurkar, Michael Bernstein

From smart homes that prepare coffee when we wake, to phones that know not to interrupt us during important conversations, our collective visions of HCI imagine a future in which computers understand a broad range of human behaviors.

Driverseat: Crowdstrapping Learning Tasks for Autonomous Driving

no code implementations7 Dec 2015 Pranav Rajpurkar, Toki Migimatsu, Jeff Kiske, Royce Cheng-Yue, Sameep Tandon, Tao Wang, Andrew Ng

While emerging deep-learning systems have outclassed knowledge-based approaches in many tasks, their application to detection tasks for autonomous technologies remains an open field for scientific exploration.

Autonomous Driving Lane Detection
