Search Results for author: Andrew Y. Ng

Found 69 papers, 31 papers with code

Many-Shot In-Context Learning in Multimodal Foundation Models

1 code implementation16 May 2024 Yixing Jiang, Jeremy Irvin, Ji Hun Wang, Muhammad Ahmed Chaudhry, Jonathan H. Chen, Andrew Y. Ng

We show that batching up to 50 queries can lead to performance improvements under zero-shot and many-shot ICL, with substantial gains in the zero-shot setting on multiple datasets, while drastically reducing per-query cost and latency.

In-Context Learning
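The batching idea in the abstract can be sketched in a few lines. This is a hypothetical illustration, not the paper's code: `build_batched_prompt` and its prompt layout are assumptions, but they show how a shared set of demonstrations can be amortized across many queries in a single prompt.

```python
# Hypothetical sketch: batching several queries into one many-shot ICL prompt,
# so the demonstration tokens are paid for once rather than once per query.
def build_batched_prompt(demos, queries):
    """demos: list of (input, answer) pairs; queries: list of inputs."""
    lines = []
    for x, y in demos:
        lines.append(f"Input: {x}\nAnswer: {y}")
    for i, q in enumerate(queries, start=1):
        lines.append(f"Query {i}: {q}")
    lines.append("Answer each query in order, one line per query.")
    return "\n\n".join(lines)

prompt = build_batched_prompt(
    demos=[("2+2", "4"), ("3+5", "8")],
    queries=["1+1", "7+2", "6+3"],
)
```

With 50 queries per prompt, the per-query share of the demonstration cost drops fifty-fold, which is the cost and latency saving the abstract refers to.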

CloudTracks: A Dataset for Localizing Ship Tracks in Satellite Images of Clouds

no code implementations25 Jan 2024 Muhammad Ahmed Chaudhry, Lyna Kim, Jeremy Irvin, Yuzu Ido, Sonia Chu, Jared Thomas Isobe, Andrew Y. Ng, Duncan Watson-Parris

Anthropogenic emissions of aerosols can alter the albedo of clouds, but the extent of this effect, and its consequent impact on temperature change, remains uncertain.

Instance Segmentation Segmentation +1

USat: A Unified Self-Supervised Encoder for Multi-Sensor Satellite Imagery

1 code implementation2 Dec 2023 Jeremy Irvin, Lucas Tao, Joanne Zhou, Yuntao Ma, Langston Nashold, Benjamin Liu, Andrew Y. Ng

Large, self-supervised vision models have led to substantial advancements for automatically interpreting natural images.

An Empirical Study of Automated Mislabel Detection in Real World Vision Datasets

no code implementations2 Dec 2023 Maya Srikanth, Jeremy Irvin, Brian Wesley Hill, Felipe Godoy, Ishan Sabane, Andrew Y. Ng

We then apply SEMD to multiple real world computer vision datasets and test how dataset size, mislabel removal strategy, and mislabel removal amount further affect model performance after retraining on the cleaned data.


Weakly-semi-supervised object detection in remotely sensed imagery

no code implementations29 Nov 2023 Ji Hun Wang, Jeremy Irvin, Beri Kohen Behar, Ha Tran, Raghav Samavedam, Quentin Hsu, Andrew Y. Ng

We train WSSOD models which use large amounts of point-labeled images with varying fractions of bounding box labeled images in FAIR1M and a wind turbine detection dataset, and demonstrate that they substantially outperform fully supervised models trained with the same amount of bounding box labeled images on both datasets.

Object object-detection +2

Detecting Neighborhood Gentrification at Scale via Street-level Visual Data

no code implementations4 Jan 2023 Tianyuan Huang, Timothy Dai, Zhecheng Wang, Hesu Yoon, Hao Sheng, Andrew Y. Ng, Ram Rajagopal, Jackelyn Hwang

Neighborhood gentrification plays a significant role in shaping the social and economic well-being of both individuals and communities at large.


Improving debris flow evacuation alerts in Taiwan using machine learning

no code implementations27 Aug 2022 Yi-Lin Tsai, Jeremy Irvin, Suhas Chundi, Andrew Y. Ng, Christopher B. Field, Peter K. Kitanidis

Towards improving this system, we implemented five machine learning models that input historical rainfall data and predict whether a debris flow will occur within a selected time.

Deep Learning-Based Sparse Whole-Slide Image Analysis for the Diagnosis of Gastric Intestinal Metaplasia

1 code implementation5 Jan 2022 Jon Braatz, Pranav Rajpurkar, Stephanie Zhang, Andrew Y. Ng, Jeanne Shen

We develop an evaluation framework inspired by the early classification literature, in order to quantify the tradeoff between diagnostic performance and inference time for sparse analytic approaches.

Early Classification

Q-Pain: A Question Answering Dataset to Measure Social Bias in Pain Management

no code implementations3 Aug 2021 Cécile Logé, Emily Ross, David Yaw Amoah Dadey, Saahil Jain, Adriel Saporta, Andrew Y. Ng, Pranav Rajpurkar

Recent advances in Natural Language Processing (NLP), and specifically automated Question Answering (QA) systems, have demonstrated both impressive linguistic fluency and a pernicious tendency to reflect social biases.

Decision Making Experimental Design +2

RadGraph: Extracting Clinical Entities and Relations from Radiology Reports

1 code implementation28 Jun 2021 Saahil Jain, Ashwin Agrawal, Adriel Saporta, Steven QH Truong, Du Nguyen Duong, Tan Bui, Pierre Chambon, Yuhao Zhang, Matthew P. Lungren, Andrew Y. Ng, Curtis P. Langlotz, Pranav Rajpurkar

We release a development dataset, which contains board-certified radiologist annotations for 500 radiology reports from the MIMIC-CXR dataset (14,579 entities and 10,889 relations), and a test dataset, which contains two independent sets of board-certified radiologist annotations for 100 radiology reports split equally across the MIMIC-CXR and CheXpert datasets.

Relation Extraction

Learning Neighborhood Representation from Multi-Modal Multi-Graph: Image, Text, Mobility Graph and Beyond

no code implementations6 May 2021 Tianyuan Huang, Zhecheng Wang, Hao Sheng, Andrew Y. Ng, Ram Rajagopal

Recent urbanization has coincided with the enrichment of geotagged data, such as street view and point-of-interest (POI).

3KG: Contrastive Learning of 12-Lead Electrocardiograms using Physiologically-Inspired Augmentations

no code implementations21 Apr 2021 Bryan Gopal, Ryan W. Han, Gautham Raghupathi, Andrew Y. Ng, Geoffrey H. Tison, Pranav Rajpurkar

We propose 3KG, a physiologically-inspired contrastive learning approach that generates views using 3D augmentations of the 12-lead electrocardiogram.

Contrastive Learning Time Series Analysis
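The core augmentation can be sketched with a rotation of the 3D vectorcardiogram (VCG). This is a simplified illustration under assumed details (a single z-axis rotation on a list of 3D samples); the paper's full pipeline derives the VCG from the 12-lead ECG and applies richer 3D transformations.

```python
import math

# Sketch of a 3KG-style view (assumed details): rotate a 3D vectorcardiogram
# trace by an angle about the z-axis to create an augmented "view" for
# contrastive learning; the norm of each 3D sample is preserved.
def rotate_vcg_z(vcg, angle):
    """vcg: list of (x, y, z) samples; angle in radians."""
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x - s * y, s * x + c * y, z) for x, y, z in vcg]

trace = [(1.0, 0.0, 0.2), (0.5, 0.5, 0.1)]
view = rotate_vcg_z(trace, math.pi / 2)  # 90-degree rotation
```

Because rotations are physiologically plausible changes of heart orientation, two such views of the same recording make a natural positive pair.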

Effect of Radiology Report Labeler Quality on Deep Learning Models for Chest X-Ray Interpretation

no code implementations1 Apr 2021 Saahil Jain, Akshay Smit, Andrew Y. Ng, Pranav Rajpurkar

Next, after training image classification models using labels generated from the different radiology report labelers on one of the largest datasets of chest X-rays, we show that an image classification model trained on labels from the VisualCheXbert labeler outperforms image classification models trained on labels from the CheXpert and CheXbert labelers.

Classification General Classification +1

MedSelect: Selective Labeling for Medical Image Classification Combining Meta-Learning with Deep Reinforcement Learning

1 code implementation26 Mar 2021 Akshay Smit, Damir Vrabac, Yujie He, Andrew Y. Ng, Andrew L. Beam, Pranav Rajpurkar

We propose a selective learning method using meta-learning and deep reinforcement learning for medical image interpretation in the setting of limited labeling resources.

General Classification Image Classification +3

CheXbreak: Misclassification Identification for Deep Learning Models Interpreting Chest X-rays

no code implementations18 Mar 2021 Emma Chen, Andy Kim, Rayan Krishnan, Jin Long, Andrew Y. Ng, Pranav Rajpurkar

A major obstacle to the integration of deep learning models for chest x-ray interpretation into clinical settings is the lack of understanding of their failure modes.

CheXseen: Unseen Disease Detection for Deep Learning Interpretation of Chest X-rays

no code implementations8 Mar 2021 Siyu Shi, Ishaan Malhi, Kevin Tran, Andrew Y. Ng, Pranav Rajpurkar

Second, we evaluate whether models trained on seen diseases can detect seen diseases when co-occurring with diseases outside the subset (unseen diseases).

VisualCheXbert: Addressing the Discrepancy Between Radiology Report Labels and Image Labels

1 code implementation23 Feb 2021 Saahil Jain, Akshay Smit, Steven QH Truong, Chanh DT Nguyen, Minh-Thanh Huynh, Mudit Jain, Victoria A. Young, Andrew Y. Ng, Matthew P. Lungren, Pranav Rajpurkar

We also find that VisualCheXbert better agrees with radiologists labeling chest X-ray images than do radiologists labeling the corresponding radiology reports by an average F1 score across several medical conditions of between 0.12 (95% CI 0.09, 0.15) and 0.21 (95% CI 0.18, 0.24).

MedAug: Contrastive learning leveraging patient metadata improves representations for chest X-ray interpretation

no code implementations21 Feb 2021 Yen Nhi Truong Vu, Richard Wang, Niranjan Balachandar, Can Liu, Andrew Y. Ng, Pranav Rajpurkar

Our controlled experiments show that the keys to improving downstream performance on disease classification are (1) using patient metadata to appropriately create positive pairs from different images with the same underlying pathologies, and (2) maximizing the number of different images used in query pairing.

Contrastive Learning
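Point (1) above, forming positive pairs from different images of the same patient, is easy to sketch. The record layout here is an assumption, but the pairing logic follows the abstract's description.

```python
from collections import defaultdict
from itertools import combinations

# Sketch (assumed data layout): build contrastive positive pairs from
# *different* images of the same patient, keyed by metadata, rather than
# from two augmentations of a single image.
def positive_pairs(records):
    """records: list of (image_id, patient_id) tuples."""
    by_patient = defaultdict(list)
    for image_id, patient_id in records:
        by_patient[patient_id].append(image_id)
    pairs = []
    for images in by_patient.values():
        pairs.extend(combinations(images, 2))  # all same-patient image pairs
    return pairs

pairs = positive_pairs([("img1", "p1"), ("img2", "p1"), ("img3", "p2")])
```

Patients with a single image contribute no cross-image pairs, which is why maximizing the number of distinct images per query (point 2) matters.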

CheXseg: Combining Expert Annotations with DNN-generated Saliency Maps for X-ray Segmentation

1 code implementation21 Feb 2021 Soham Gadgil, Mark Endo, Emily Wen, Andrew Y. Ng, Pranav Rajpurkar

Medical image segmentation models are typically supervised by expert annotations at the pixel-level, which can be expensive to acquire.

Image Segmentation Knowledge Distillation +3

CheXternal: Generalization of Deep Learning Models for Chest X-ray Interpretation to Photos of Chest X-rays and External Clinical Settings

1 code implementation17 Feb 2021 Pranav Rajpurkar, Anirudh Joshi, Anuj Pareek, Andrew Y. Ng, Matthew P. Lungren

Recent advances in training deep learning models have demonstrated the potential to provide accurate chest X-ray interpretation and increase access to radiology expertise.

CheXtransfer: Performance and Parameter Efficiency of ImageNet Models for Chest X-Ray Interpretation

no code implementations18 Jan 2021 Alexander Ke, William Ellsworth, Oishi Banerjee, Andrew Y. Ng, Pranav Rajpurkar

First, we find no relationship between ImageNet performance and CheXpert performance for both models without pretraining and models with pretraining.

MoCo-Pretraining Improves Representations and Transferability of Chest X-ray Models

no code implementations1 Jan 2021 Hari Sowrirajan, Jing Bo Yang, Andrew Y. Ng, Pranav Rajpurkar

Using 0.1% of labeled training data, we find that a linear model trained on MoCo-pretrained representations outperforms one trained on representations without MoCo-pretraining by an AUC of 0.096 (95% CI 0.061, 0.130), indicating that MoCo-pretrained representations are of higher quality.

Image Classification Transfer Learning

OGNet: Towards a Global Oil and Gas Infrastructure Database using Deep Learning on Remotely Sensed Imagery

no code implementations14 Nov 2020 Hao Sheng, Jeremy Irvin, Sasankh Munukutla, Shawn Zhang, Christopher Cross, Kyle Story, Rose Rustowicz, Cooper Elsworth, Zutao Yang, Mark Omara, Ritesh Gautam, Robert B. Jackson, Andrew Y. Ng

In this work, we develop deep learning algorithms that leverage freely available high-resolution aerial imagery to automatically detect oil and gas infrastructure, one of the largest contributors to global methane emissions.


CheXphotogenic: Generalization of Deep Learning Models for Chest X-ray Interpretation to Photos of Chest X-rays

no code implementations12 Nov 2020 Pranav Rajpurkar, Anirudh Joshi, Anuj Pareek, Jeremy Irvin, Andrew Y. Ng, Matthew Lungren

In this study, we measured the diagnostic performance for 8 different chest x-ray models when applied to photos of chest x-rays.

ForestNet: Classifying Drivers of Deforestation in Indonesia using Deep Learning on Satellite Imagery

1 code implementation11 Nov 2020 Jeremy Irvin, Hao Sheng, Neel Ramachandran, Sonja Johnson-Yu, Sharon Zhou, Kyle Story, Rose Rustowicz, Cooper Elsworth, Kemen Austin, Andrew Y. Ng

Characterizing the processes leading to deforestation is critical to the development and implementation of targeted forest conservation and management policies.

General Classification Management

MoCo-CXR: MoCo Pretraining Improves Representation and Transferability of Chest X-ray Models

2 code implementations11 Oct 2020 Hari Sowrirajan, Jingbo Yang, Andrew Y. Ng, Pranav Rajpurkar

In this work, we propose MoCo-CXR, which is an adaptation of the contrastive learning method Momentum Contrast (MoCo), to produce models with better representations and initializations for the detection of pathologies in chest X-rays.

Contrastive Learning Image Classification +1
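The momentum mechanism at the heart of MoCo (which MoCo-CXR adapts) can be shown in one line. This is a deliberately simplified sketch over plain Python lists rather than network tensors.

```python
# Sketch of MoCo's core momentum update (simplified to plain lists): the key
# encoder's parameters are an exponential moving average of the query
# encoder's, which keeps the dictionary of encoded keys consistent over time.
def momentum_update(key_params, query_params, m=0.999):
    return [m * k + (1.0 - m) * q for k, q in zip(key_params, query_params)]

key = [0.0, 1.0]
query = [1.0, 1.0]
key = momentum_update(key, query, m=0.9)  # key slowly tracks query
```

A momentum close to 1 means the key encoder changes slowly, so keys computed in earlier batches remain comparable to current queries.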

Evaluating the Disentanglement of Deep Generative Models through Manifold Topology

1 code implementation ICLR 2021 Sharon Zhou, Eric Zelikman, Fred Lu, Andrew Y. Ng, Gunnar Carlsson, Stefano Ermon

Learning disentangled representations is regarded as a fundamental task for improving the generalization, robustness, and interpretability of generative models.


CheXpedition: Investigating Generalization Challenges for Translation of Chest X-Ray Algorithms to the Clinical Setting

no code implementations26 Feb 2020 Pranav Rajpurkar, Anirudh Joshi, Anuj Pareek, Phil Chen, Amirhossein Kiani, Jeremy Irvin, Andrew Y. Ng, Matthew P. Lungren

First, we find that the top 10 chest x-ray models on the CheXpert competition achieve an average AUC of 0.851 on the task of detecting TB on two public TB datasets without fine-tuning or including the TB labels in training data.


Data augmentation with Möbius transformations

1 code implementation7 Feb 2020 Sharon Zhou, Jiequan Zhang, Hang Jiang, Torbjorn Lundh, Andrew Y. Ng

Data augmentation has led to substantial improvements in the performance and generalization of deep models, and remains a highly adaptable method for evolving model architectures and varying amounts of data, in particular extremely scarce amounts of available training data.

Data Augmentation Translation
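The transformation itself is compact. This sketch applies the Möbius map f(z) = (az + b)/(cz + d) to pixel coordinates treated as complex numbers; the surrounding resampling step (mapping transformed coordinates back onto an image grid) is omitted, and the parameterization shown is an assumption.

```python
# Sketch (assumed parameterization): a Mobius transformation
# f(z) = (a*z + b) / (c*z + d), with ad - bc != 0, applied to pixel
# coordinates treated as complex numbers; the transformed coordinates would
# then be used to resample the image.
def mobius(z, a, b, c, d):
    assert a * d - b * c != 0, "coefficients must define an invertible map"
    return (a * z + b) / (c * z + d)

coords = [complex(x, y) for x in range(2) for y in range(2)]
# The identity transform (a=d=1, b=c=0) leaves coordinates unchanged.
same = [mobius(z, 1, 0, 0, 1) for z in coords]
```

Varying a, b, c, d yields rotations, scalings, and inversions as special cases, which is what makes the family attractive as an augmentation.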

NGBoost: Natural Gradient Boosting for Probabilistic Prediction

4 code implementations ICML 2020 Tony Duan, Anand Avati, Daisy Yi Ding, Khanh K. Thai, Sanjay Basu, Andrew Y. Ng, Alejandro Schuler

NGBoost generalizes gradient boosting to probabilistic regression by treating the parameters of the conditional distribution as targets for a multiparameter boosting algorithm.

regression Weather Forecasting
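The abstract's idea, treating distribution parameters as boosting targets and following the natural gradient, can be illustrated in miniature. This is not NGBoost itself: it drops the base learners entirely and fits a single global Normal (mu, log sigma) by natural-gradient descent on the negative log-likelihood. For a Normal in this parameterization the Fisher information is diag(1/sigma^2, 2), which gives the simple natural-gradient expressions used below.

```python
import math

# Stripped-down illustration (not the NGBoost library): treat the parameters
# of a Normal predictive distribution, (mu, log sigma), as targets and descend
# the negative log-likelihood along the *natural* gradient. With Fisher
# information diag(1/sigma^2, 2), the natural gradient works out to
# (mu - y) for mu and 0.5 * (1 - (y - mu)^2 / sigma^2) for log sigma.
def fit_normal(y, steps=200, lr=0.1):
    mu, log_sigma = 0.0, 0.0
    for _ in range(steps):
        sigma2 = math.exp(2 * log_sigma)
        g_mu = sum(mu - yi for yi in y) / len(y)
        g_ls = sum(0.5 * (1 - (yi - mu) ** 2 / sigma2) for yi in y) / len(y)
        mu -= lr * g_mu            # natural-gradient step on the mean
        log_sigma -= lr * g_ls     # natural-gradient step on the log-scale
    return mu, math.exp(log_sigma)

mu, sigma = fit_normal([1.0, 2.0, 3.0, 4.0])
```

In NGBoost proper, each natural-gradient component is fit by a base learner (e.g. a shallow tree) per boosting round, so the parameters vary with the input features instead of being global constants.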

MURA: Large Dataset for Abnormality Detection in Musculoskeletal Radiographs

11 code implementations11 Dec 2017 Pranav Rajpurkar, Jeremy Irvin, Aarti Bagul, Daisy Ding, Tony Duan, Hershel Mehta, Brandon Yang, Kaylie Zhu, Dillon Laird, Robyn L. Ball, Curtis Langlotz, Katie Shpanskaya, Matthew P. Lungren, Andrew Y. Ng

To evaluate models robustly and to get an estimate of radiologist performance, we collect additional labels from six board-certified Stanford radiologists on the test set, consisting of 207 musculoskeletal studies.

Anomaly Detection Specificity

Cardiologist-Level Arrhythmia Detection with Convolutional Neural Networks

7 code implementations6 Jul 2017 Pranav Rajpurkar, Awni Y. Hannun, Masoumeh Haghpanahi, Codie Bourn, Andrew Y. Ng

We develop an algorithm which exceeds the performance of board certified cardiologists in detecting a wide range of heart arrhythmias from electrocardiograms recorded with a single-lead wearable monitor.

Arrhythmia Detection

Neural Language Correction with Character-Based Attention

3 code implementations31 Mar 2016 Ziang Xie, Anand Avati, Naveen Arivazhagan, Dan Jurafsky, Andrew Y. Ng

Motivated by these issues, we present a neural network-based approach to language correction.

Decoder Language Modelling +2

First-Pass Large Vocabulary Continuous Speech Recognition using Bi-Directional Recurrent DNNs

5 code implementations12 Aug 2014 Awni Y. Hannun, Andrew L. Maas, Daniel Jurafsky, Andrew Y. Ng

This approach to decoding enables first-pass speech recognition with a language model, completely unaided by the cumbersome infrastructure of HMM-based systems.

Language Modelling speech-recognition +1

Building DNN Acoustic Models for Large Vocabulary Speech Recognition

1 code implementation30 Jun 2014 Andrew L. Maas, Peng Qi, Ziang Xie, Awni Y. Hannun, Christopher T. Lengerich, Daniel Jurafsky, Andrew Y. Ng

We compare standard DNNs to convolutional networks, and present the first experiments using locally-connected, untied neural networks for acoustic modeling.

speech-recognition Speech Recognition

Grounded Compositional Semantics for Finding and Describing Images with Sentences

no code implementations TACL 2014 Richard Socher, Andrej Karpathy, Quoc V. Le, Christopher D. Manning, Andrew Y. Ng

Previous work on Recursive Neural Networks (RNNs) shows that these models can produce compositional feature vectors for accurately representing and classifying sentences or images.


Emergence of Object-Selective Features in Unsupervised Feature Learning

no code implementations NeurIPS 2012 Adam Coates, Andrej Karpathy, Andrew Y. Ng

Recent work in unsupervised feature learning has focused on the goal of discovering high-level features from unlabeled images.


Selecting Receptive Fields in Deep Networks

no code implementations NeurIPS 2011 Adam Coates, Andrew Y. Ng

Recent deep learning and unsupervised feature learning systems that learn from unlabeled data have achieved high performance in benchmarks by using extremely large architectures with many features (hidden units) at each layer.

Unsupervised learning models of primary cortical receptive fields and receptive field plasticity

no code implementations NeurIPS 2011 Maneesh Bhand, Ritvik Mudur, Bipin Suresh, Andrew Saxe, Andrew Y. Ng

In this work we focus on that component of adaptation which occurs during an organism's lifetime, and show that a number of unsupervised feature learning algorithms can account for features of normal receptive field properties across multiple primary sensory cortices.

Sparse Filtering

no code implementations NeurIPS 2011 Jiquan Ngiam, Zhenghao Chen, Sonia A. Bhaskar, Pang W. Koh, Andrew Y. Ng

Unsupervised feature learning has been shown to be effective at learning representations that perform well on image, video and audio classification.

Audio Classification General Classification

Energy Disaggregation via Discriminative Sparse Coding

no code implementations NeurIPS 2010 J. Z. Kolter, Siddharth Batra, Andrew Y. Ng

Energy disaggregation is the task of taking a whole-home energy signal and separating it into its component appliances.

Structured Prediction

Tiled convolutional neural networks

no code implementations NeurIPS 2010 Jiquan Ngiam, Zhenghao Chen, Daniel Chia, Pang W. Koh, Quoc V. Le, Andrew Y. Ng

Using convolutional (tied) weights significantly reduces the number of parameters that have to be learned, and also allows translational invariance to be hard-coded into the architecture.

Object Recognition

Measuring Invariances in Deep Networks

no code implementations NeurIPS 2009 Ian Goodfellow, Honglak Lee, Quoc V. Le, Andrew Saxe, Andrew Y. Ng

Our evaluation metrics can also be used to evaluate future work in unsupervised deep learning, and thus help the development of future algorithms.

Sparse deep belief net model for visual area V2

no code implementations NeurIPS 2007 Honglak Lee, Chaitanya Ekanadham, Andrew Y. Ng

This suggests that our sparse variant of deep belief networks holds promise for modeling more higher-order features.

Efficient multiple hyperparameter learning for log-linear models

no code implementations NeurIPS 2007 Chuan-Sheng Foo, Chuong B. Do, Andrew Y. Ng

Using multiple regularization hyperparameters is an effective method for managing model complexity in problems where input features have varying amounts of noise.

Structured Prediction

Latent Dirichlet Allocation

2 code implementations1 Jan 2003 David M. Blei, Andrew Y. Ng, Michael I. Jordan

Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities.

Collaborative Filtering Text Categorization +2
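LDA's generative story can be sketched directly. The toy sizes and the fixed topic-word matrix `beta` below are assumptions for illustration; in the model itself `beta` is learned and only the generative process is as shown: each document draws a topic mixture theta from a Dirichlet, then each word draws a topic z ~ theta and a word w ~ beta[z].

```python
import random

random.seed(0)

# Sketch of LDA's generative process (toy sizes, assumed fixed topic-word
# distributions).
def sample_dirichlet(alpha):
    """Dirichlet sample via normalized Gamma draws."""
    g = [random.gammavariate(a, 1.0) for a in alpha]
    total = sum(g)
    return [x / total for x in g]

def generate_doc(beta, alpha, n_words):
    """beta: topics x vocabulary matrix of word probabilities."""
    theta = sample_dirichlet(alpha)  # per-document topic mixture
    doc = []
    for _ in range(n_words):
        z = random.choices(range(len(beta)), weights=theta)[0]       # topic
        w = random.choices(range(len(beta[z])), weights=beta[z])[0]  # word
        doc.append(w)
    return doc

beta = [[0.7, 0.2, 0.1, 0.0], [0.0, 0.1, 0.2, 0.7]]  # 2 topics, 4-word vocab
doc = generate_doc(beta, alpha=[0.5, 0.5], n_words=20)
```

Inference in LDA inverts this process, recovering theta and beta from observed documents via variational methods or Gibbs sampling.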
