Search Results for author: Andrew Y. Ng

Found 59 papers, 24 papers with code

Q-Pain: A Question Answering Dataset to Measure Social Bias in Pain Management

no code implementations 3 Aug 2021 Cécile Logé, Emily Ross, David Yaw Amoah Dadey, Saahil Jain, Adriel Saporta, Andrew Y. Ng, Pranav Rajpurkar

Recent advances in Natural Language Processing (NLP), and specifically automated Question Answering (QA) systems, have demonstrated both impressive linguistic fluency and a pernicious tendency to reflect social biases.

Decision Making Experimental Design +1

RadGraph: Extracting Clinical Entities and Relations from Radiology Reports

no code implementations 28 Jun 2021 Saahil Jain, Ashwin Agrawal, Adriel Saporta, Steven QH Truong, Du Nguyen Duong, Tan Bui, Pierre Chambon, Yuhao Zhang, Matthew P. Lungren, Andrew Y. Ng, Curtis P. Langlotz, Pranav Rajpurkar

We release a development dataset, which contains board-certified radiologist annotations for 500 radiology reports from the MIMIC-CXR dataset (14,579 entities and 10,889 relations), and a test dataset, which contains two independent sets of board-certified radiologist annotations for 100 radiology reports split equally across the MIMIC-CXR and CheXpert datasets.

Relation Extraction

Learning Neighborhood Representation from Multi-Modal Multi-Graph: Image, Text, Mobility Graph and Beyond

no code implementations 6 May 2021 Tianyuan Huang, Zhecheng Wang, Hao Sheng, Andrew Y. Ng, Ram Rajagopal

Recent urbanization has coincided with the enrichment of geotagged data, such as street view and point-of-interest (POI).

3KG: Contrastive Learning of 12-Lead Electrocardiograms using Physiologically-Inspired Augmentations

no code implementations 21 Apr 2021 Bryan Gopal, Ryan W. Han, Gautham Raghupathi, Andrew Y. Ng, Geoffrey H. Tison, Pranav Rajpurkar

We propose 3KG, a physiologically-inspired contrastive learning approach that generates views using 3D augmentations of the 12-lead electrocardiogram.

Contrastive Learning Fine-tuning +1
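The core idea — generating augmented "views" of an ECG by rotating its 3D representation — can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function names are hypothetical, and it assumes the 12 leads have already been projected to a 3-channel vectorcardiogram (e.g., via a Dower-style transform), which the sketch stands in for with a toy array.

```python
import numpy as np

def random_rotation(max_angle_deg=45.0, rng=None):
    """Random 3-D rotation about a random axis (Rodrigues' formula)."""
    if rng is None:
        rng = np.random.default_rng()
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    theta = np.deg2rad(rng.uniform(-max_angle_deg, max_angle_deg))
    # Skew-symmetric cross-product matrix of the rotation axis
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def augment_vcg(vcg, rng=None):
    """Rotate a (3, T) vectorcardiogram to produce a new contrastive 'view'."""
    return random_rotation(rng=rng) @ vcg

rng = np.random.default_rng(0)
vcg = rng.normal(size=(3, 500))   # toy 3-channel ECG trajectory
view = augment_vcg(vcg, rng=rng)  # same signal, physiologically plausible pose
```

Two independently rotated views of the same recording would then serve as a positive pair for a standard contrastive objective.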

Effect of Radiology Report Labeler Quality on Deep Learning Models for Chest X-Ray Interpretation

no code implementations 1 Apr 2021 Saahil Jain, Akshay Smit, Andrew Y. Ng, Pranav Rajpurkar

Next, after training image classification models using labels generated from the different radiology report labelers on one of the largest datasets of chest X-rays, we show that an image classification model trained on labels from the VisualCheXbert labeler outperforms image classification models trained on labels from the CheXpert and CheXbert labelers.

Classification General Classification +1

MedSelect: Selective Labeling for Medical Image Classification Combining Meta-Learning with Deep Reinforcement Learning

1 code implementation 26 Mar 2021 Akshay Smit, Damir Vrabac, Yujie He, Andrew Y. Ng, Andrew L. Beam, Pranav Rajpurkar

We propose a selective learning method using meta-learning and deep reinforcement learning for medical image interpretation in the setting of limited labeling resources.

General Classification Image Classification +1

CheXbreak: Misclassification Identification for Deep Learning Models Interpreting Chest X-rays

no code implementations 18 Mar 2021 Emma Chen, Andy Kim, Rayan Krishnan, Jin Long, Andrew Y. Ng, Pranav Rajpurkar

A major obstacle to the integration of deep learning models for chest x-ray interpretation into clinical settings is the lack of understanding of their failure modes.

CheXseen: Unseen Disease Detection for Deep Learning Interpretation of Chest X-rays

no code implementations 8 Mar 2021 Siyu Shi, Ishaan Malhi, Kevin Tran, Andrew Y. Ng, Pranav Rajpurkar

Second, we evaluate whether models trained on seen diseases can detect seen diseases when co-occurring with diseases outside the subset (unseen diseases).

VisualCheXbert: Addressing the Discrepancy Between Radiology Report Labels and Image Labels

1 code implementation 23 Feb 2021 Saahil Jain, Akshay Smit, Steven QH Truong, Chanh DT Nguyen, Minh-Thanh Huynh, Mudit Jain, Victoria A. Young, Andrew Y. Ng, Matthew P. Lungren, Pranav Rajpurkar

We also find that VisualCheXbert better agrees with radiologists labeling chest X-ray images than do radiologists labeling the corresponding radiology reports, by an average F1 score across several medical conditions of between 0.12 (95% CI 0.09, 0.15) and 0.21 (95% CI 0.18, 0.24).

CheXseg: Combining Expert Annotations with DNN-generated Saliency Maps for X-ray Segmentation

1 code implementation 21 Feb 2021 Soham Gadgil, Mark Endo, Emily Wen, Andrew Y. Ng, Pranav Rajpurkar

Medical image segmentation models are typically supervised by expert annotations at the pixel-level, which can be expensive to acquire.

Knowledge Distillation Medical Image Segmentation

MedAug: Contrastive learning leveraging patient metadata improves representations for chest X-ray interpretation

no code implementations 21 Feb 2021 Yen Nhi Truong Vu, Richard Wang, Niranjan Balachandar, Can Liu, Andrew Y. Ng, Pranav Rajpurkar

Our controlled experiments show that the keys to improving downstream performance on disease classification are (1) using patient metadata to appropriately create positive pairs from different images with the same underlying pathologies, and (2) maximizing the number of different images used in query pairing.

Contrastive Learning Fine-tuning
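Point (1) above — using patient metadata to form positive pairs from *different* images of the same patient — reduces to a simple grouping step. The following is a minimal sketch of that pairing logic, not the authors' pipeline; the function and record names are illustrative.

```python
from collections import defaultdict
from itertools import combinations

def metadata_positive_pairs(records):
    """Form positive pairs from distinct images sharing a patient ID.

    records: iterable of (image_id, patient_id) tuples.
    Returns a list of (image_id_a, image_id_b) positive pairs.
    """
    by_patient = defaultdict(list)
    for image_id, patient_id in records:
        by_patient[patient_id].append(image_id)
    pairs = []
    for images in by_patient.values():
        # every unordered pair of distinct images from the same patient
        pairs.extend(combinations(images, 2))
    return pairs

records = [("img1", "p1"), ("img2", "p1"), ("img3", "p2"), ("img4", "p1")]
pairs = metadata_positive_pairs(records)  # 3 pairs among patient p1's images
```

In practice one would further filter pairs by criteria such as matching view or study, per the controlled comparisons described above.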

CheXternal: Generalization of Deep Learning Models for Chest X-ray Interpretation to Photos of Chest X-rays and External Clinical Settings

no code implementations 17 Feb 2021 Pranav Rajpurkar, Anirudh Joshi, Anuj Pareek, Andrew Y. Ng, Matthew P. Lungren

Recent advances in training deep learning models have demonstrated the potential to provide accurate chest X-ray interpretation and increase access to radiology expertise.

CheXtransfer: Performance and Parameter Efficiency of ImageNet Models for Chest X-Ray Interpretation

no code implementations 18 Jan 2021 Alexander Ke, William Ellsworth, Oishi Banerjee, Andrew Y. Ng, Pranav Rajpurkar

First, we find no relationship between ImageNet performance and CheXpert performance for both models without pretraining and models with pretraining.

MoCo-Pretraining Improves Representations and Transferability of Chest X-ray Models

no code implementations 1 Jan 2021 Hari Sowrirajan, Jing Bo Yang, Andrew Y. Ng, Pranav Rajpurkar

Using 0.1% of labeled training data, we find that a linear model trained on MoCo-pretrained representations outperforms one trained on representations without MoCo-pretraining by an AUC of 0.096 (95% CI 0.061, 0.130), indicating that MoCo-pretrained representations are of higher quality.

Fine-tuning Image Classification +1

OGNet: Towards a Global Oil and Gas Infrastructure Database using Deep Learning on Remotely Sensed Imagery

no code implementations 14 Nov 2020 Hao Sheng, Jeremy Irvin, Sasankh Munukutla, Shawn Zhang, Christopher Cross, Kyle Story, Rose Rustowicz, Cooper Elsworth, Zutao Yang, Mark Omara, Ritesh Gautam, Robert B. Jackson, Andrew Y. Ng

In this work, we develop deep learning algorithms that leverage freely available high-resolution aerial imagery to automatically detect oil and gas infrastructure, one of the largest contributors to global methane emissions.

CheXphotogenic: Generalization of Deep Learning Models for Chest X-ray Interpretation to Photos of Chest X-rays

no code implementations 12 Nov 2020 Pranav Rajpurkar, Anirudh Joshi, Anuj Pareek, Jeremy Irvin, Andrew Y. Ng, Matthew Lungren

In this study, we measured the diagnostic performance for 8 different chest x-ray models when applied to photos of chest x-rays.

MoCo-CXR: MoCo Pretraining Improves Representation and Transferability of Chest X-ray Models

2 code implementations 11 Oct 2020 Hari Sowrirajan, Jingbo Yang, Andrew Y. Ng, Pranav Rajpurkar

In this work, we propose MoCo-CXR, which is an adaptation of the contrastive learning method Momentum Contrast (MoCo), to produce models with better representations and initializations for the detection of pathologies in chest X-rays.

Contrastive Learning Fine-tuning +2
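The Momentum Contrast mechanism that MoCo-CXR adapts rests on two small pieces: a slowly updated key encoder (theta_k <- m * theta_k + (1 - m) * theta_q) and a fixed-size queue of past key embeddings serving as negatives. Below is a toy numpy sketch of just those two updates, with hypothetical names, standing in for the real encoder networks:

```python
import numpy as np

def momentum_update(key_params, query_params, m=0.999):
    """MoCo's key-encoder update: theta_k <- m * theta_k + (1 - m) * theta_q."""
    return [m * k + (1 - m) * q for k, q in zip(key_params, query_params)]

def enqueue(queue, keys, max_size=4096):
    """Push the newest key embeddings; drop the oldest beyond max_size."""
    queue = np.concatenate([keys, queue], axis=0)
    return queue[:max_size]

# Toy 'parameters': one weight matrix per encoder.
theta_q = [np.ones((2, 2))]
theta_k = [np.zeros((2, 2))]
theta_k = momentum_update(theta_k, theta_q, m=0.9)  # moves 10% toward theta_q
```

The momentum coefficient keeps the key encoder nearly stationary between steps, which is what makes the queued negatives approximately consistent across iterations.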

Short-Term Solar Irradiance Forecasting Using Calibrated Probabilistic Models

no code implementations 9 Oct 2020 Eric Zelikman, Sharon Zhou, Jeremy Irvin, Cooper Raterink, Hao Sheng, Anand Avati, Jack Kelly, Ram Rajagopal, Andrew Y. Ng, David Gagne

Advancing probabilistic solar forecasting methods is essential to supporting the integration of solar energy into the electricity grid.

DLBCL-Morph: Morphological features computed using deep learning for an annotated digital DLBCL image set

1 code implementation 17 Sep 2020 Damir Vrabac, Akshay Smit, Rebecca Rojansky, Yasodha Natkunam, Ranjana H. Advani, Andrew Y. Ng, Sebastian Fernandez-Pol, Pranav Rajpurkar

We used a deep learning model to segment all tumor nuclei in the ROIs, and computed several geometric features for each segmented nucleus.

Evaluating the Disentanglement of Deep Generative Models through Manifold Topology

1 code implementation ICLR 2021 Sharon Zhou, Eric Zelikman, Fred Lu, Andrew Y. Ng, Gunnar Carlsson, Stefano Ermon

Learning disentangled representations is regarded as a fundamental task for improving the generalization, robustness, and interpretability of generative models.

CheXpedition: Investigating Generalization Challenges for Translation of Chest X-Ray Algorithms to the Clinical Setting

no code implementations 26 Feb 2020 Pranav Rajpurkar, Anirudh Joshi, Anuj Pareek, Phil Chen, Amirhossein Kiani, Jeremy Irvin, Andrew Y. Ng, Matthew P. Lungren

First, we find that the top 10 chest x-ray models on the CheXpert competition achieve an average AUC of 0.851 on the task of detecting TB on two public TB datasets without fine-tuning or including the TB labels in training data.

Fine-tuning Translation

Data augmentation with Möbius transformations

1 code implementation 7 Feb 2020 Sharon Zhou, Jiequan Zhang, Hang Jiang, Torbjorn Lundh, Andrew Y. Ng

Data augmentation has led to substantial improvements in the performance and generalization of deep models, and remains a highly adaptable method for evolving model architectures and varying amounts of data, in particular extremely scarce amounts of available training data.

Data Augmentation Translation

NGBoost: Natural Gradient Boosting for Probabilistic Prediction

4 code implementations ICML 2020 Tony Duan, Anand Avati, Daisy Yi Ding, Khanh K. Thai, Sanjay Basu, Andrew Y. Ng, Alejandro Schuler

NGBoost generalizes gradient boosting to probabilistic regression by treating the parameters of the conditional distribution as targets for a multiparameter boosting algorithm.

Weather Forecasting
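The key move in NGBoost — treating each parameter of the predicted distribution as a boosting target, with natural gradients as the fitting signal — can be illustrated without the tree base learners. The toy sketch below (my illustration, not the NGBoost implementation) fits a constant Normal distribution by natural-gradient descent on the negative log-likelihood; in NGBoost proper, each step's natural gradient would instead be fit by a base learner per parameter.

```python
import numpy as np

def natural_grad_step(mu, log_sigma, y, lr=0.1):
    """One natural-gradient step on the Normal negative log-likelihood.

    In the (mu, log_sigma) parameterization the Fisher information is
    diag(1 / sigma^2, 2), so preconditioning the NLL gradient by its
    inverse yields the closed-form natural gradients below.
    """
    sigma2 = np.exp(2 * log_sigma)
    nat_grad_mu = np.mean(mu - y)                          # sigma^2 * dNLL/dmu
    nat_grad_ls = np.mean(0.5 * (1 - (y - mu) ** 2 / sigma2))
    return mu - lr * nat_grad_mu, log_sigma - lr * nat_grad_ls

rng = np.random.default_rng(0)
y = rng.normal(loc=3.0, scale=2.0, size=2000)  # toy heteroscedastic-free data

mu, log_sigma = 0.0, 0.0
for _ in range(300):
    mu, log_sigma = natural_grad_step(mu, log_sigma, y)
# mu converges to the sample mean, exp(log_sigma) to the sample std
```

The natural gradient's invariance to parameterization is what lets both parameters be boosted on a common scale, which is the multiparameter trick the abstract refers to.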

MURA: Large Dataset for Abnormality Detection in Musculoskeletal Radiographs

11 code implementations 11 Dec 2017 Pranav Rajpurkar, Jeremy Irvin, Aarti Bagul, Daisy Ding, Tony Duan, Hershel Mehta, Brandon Yang, Kaylie Zhu, Dillon Laird, Robyn L. Ball, Curtis Langlotz, Katie Shpanskaya, Matthew P. Lungren, Andrew Y. Ng

To evaluate models robustly and to get an estimate of radiologist performance, we collect additional labels from six board-certified Stanford radiologists on the test set, consisting of 207 musculoskeletal studies.

Anomaly Detection

Cardiologist-Level Arrhythmia Detection with Convolutional Neural Networks

6 code implementations 6 Jul 2017 Pranav Rajpurkar, Awni Y. Hannun, Masoumeh Haghpanahi, Codie Bourn, Andrew Y. Ng

We develop an algorithm which exceeds the performance of board-certified cardiologists in detecting a wide range of heart arrhythmias from electrocardiograms recorded with a single-lead wearable monitor.

Arrhythmia Detection

First-Pass Large Vocabulary Continuous Speech Recognition using Bi-Directional Recurrent DNNs

4 code implementations 12 Aug 2014 Awni Y. Hannun, Andrew L. Maas, Daniel Jurafsky, Andrew Y. Ng

This approach to decoding enables first-pass speech recognition with a language model, completely unaided by the cumbersome infrastructure of HMM-based systems.

Language Modelling Large Vocabulary Continuous Speech Recognition +1

Grounded Compositional Semantics for Finding and Describing Images with Sentences

no code implementations TACL 2014 Richard Socher, Andrej Karpathy, Quoc V. Le, Christopher D. Manning, Andrew Y. Ng

Previous work on Recursive Neural Networks (RNNs) shows that these models can produce compositional feature vectors for accurately representing and classifying sentences or images.

Large Scale Distributed Deep Networks

no code implementations NeurIPS 2012 Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Marc'Aurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, Quoc V. Le, Andrew Y. Ng

Recent work in unsupervised feature learning and deep learning has shown that being able to train large models can dramatically improve performance.

Object Recognition Speech Recognition

Emergence of Object-Selective Features in Unsupervised Feature Learning

no code implementations NeurIPS 2012 Adam Coates, Andrej Karpathy, Andrew Y. Ng

Recent work in unsupervised feature learning has focused on the goal of discovering high-level features from unlabeled images.

Sparse Filtering

no code implementations NeurIPS 2011 Jiquan Ngiam, Zhenghao Chen, Sonia A. Bhaskar, Pang W. Koh, Andrew Y. Ng

Unsupervised feature learning has been shown to be effective at learning representations that perform well on image, video and audio classification.

Audio Classification Classification +2

Unsupervised learning models of primary cortical receptive fields and receptive field plasticity

no code implementations NeurIPS 2011 Maneesh Bhand, Ritvik Mudur, Bipin Suresh, Andrew Saxe, Andrew Y. Ng

In this work we focus on that component of adaptation which occurs during an organism's lifetime, and show that a number of unsupervised feature learning algorithms can account for features of normal receptive field properties across multiple primary sensory cortices.

Selecting Receptive Fields in Deep Networks

no code implementations NeurIPS 2011 Adam Coates, Andrew Y. Ng

Recent deep learning and unsupervised feature learning systems that learn from unlabeled data have achieved high performance in benchmarks by using extremely large architectures with many features (hidden units) at each layer.

Tiled convolutional neural networks

no code implementations NeurIPS 2010 Jiquan Ngiam, Zhenghao Chen, Daniel Chia, Pang W. Koh, Quoc V. Le, Andrew Y. Ng

Using convolutional (tied) weights significantly reduces the number of parameters that have to be learned, and also allows translational invariance to be hard-coded into the architecture.

Object Recognition
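The trade-off described above — tied weights save parameters, tiling relaxes the tying — is easiest to see in one dimension. This is a toy illustration of the tiling idea (hypothetical names, not the paper's code): output position i uses filter i mod k for tile size k, so tile size 1 recovers an ordinary fully tied convolution while larger tiles spend more parameters to learn a richer set of local filters.

```python
import numpy as np

def tiled_conv1d(x, weight_bank, tile_size):
    """1-D 'tiled' convolution: output position i uses filter i % tile_size.

    weight_bank: (tile_size, k) array of filters, untied within a tile.
    tile_size=1 is an ordinary convolution with a single shared filter.
    """
    k = weight_bank.shape[1]
    out = np.empty(len(x) - k + 1)
    for i in range(len(out)):
        out[i] = x[i:i + k] @ weight_bank[i % tile_size]
    return out

x = np.arange(8.0)
w = np.array([[1.0, -1.0]])              # one shared difference filter
tied = tiled_conv1d(x, w, tile_size=1)   # matches a standard cross-correlation
```

With tile size t, a layer stores t times the weights of the tied case but remains invariant to translations by multiples of t, which is the hard-coded invariance the abstract mentions.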

Energy Disaggregation via Discriminative Sparse Coding

no code implementations NeurIPS 2010 J. Z. Kolter, Siddharth Batra, Andrew Y. Ng

Energy disaggregation is the task of taking a whole-home energy signal and separating it into its component appliances.

Structured Prediction

Measuring Invariances in Deep Networks

no code implementations NeurIPS 2009 Ian Goodfellow, Honglak Lee, Quoc V. Le, Andrew Saxe, Andrew Y. Ng

Our evaluation metrics can also be used to evaluate future work in unsupervised deep learning, and thus help the development of future algorithms.

Efficient multiple hyperparameter learning for log-linear models

no code implementations NeurIPS 2007 Chuan-Sheng Foo, Chuong B. Do, Andrew Y. Ng

Using multiple regularization hyperparameters is an effective method for managing model complexity in problems where input features have varying amounts of noise.

Structured Prediction

Sparse deep belief net model for visual area V2

no code implementations NeurIPS 2007 Honglak Lee, Chaitanya Ekanadham, Andrew Y. Ng

This suggests that our sparse variant of deep belief networks holds promise for modeling even higher-order features.

Latent Dirichlet Allocation

2 code implementations 1 Jan 2003 David M. Blei, Andrew Y. Ng, Michael I. Jordan

Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities.

Collaborative Filtering Text Categorization +1
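LDA is now available in standard libraries, so the document-topic mixtures the abstract describes can be recovered in a few lines. A minimal sketch using scikit-learn's implementation (the corpus below is a made-up toy, and this library uses variational inference rather than the paper's exact derivation):

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "gene dna genome sequencing",
    "dna protein gene expression",
    "match team season score",
    "team player score coach",
]
X = CountVectorizer().fit_transform(docs)     # document-term count matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)             # per-document topic mixture
# lda.components_ holds the per-topic word weights
```

Each row of `doc_topics` is a probability distribution over the two topics, i.e., the per-document topic mixture the model posits.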
