no code implementations • 4 Jan 2023 • Tianyuan Huang, Timothy Dai, Zhecheng Wang, Hesu Yoon, Hao Sheng, Andrew Y. Ng, Ram Rajagopal, Jackelyn Hwang
Neighborhood gentrification plays a significant role in shaping the social and economic well-being of both individuals and communities at large.
no code implementations • 27 Aug 2022 • Yi-Lin Tsai, Jeremy Irvin, Suhas Chundi, Andrew Y. Ng, Christopher B. Field, Peter K. Kitanidis
Towards improving this system, we implemented five machine learning models that input historical rainfall data and predict whether a debris flow will occur within a selected time window.
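The snippet does not name the five models, so the sketch below is only a minimal illustration of the setup it describes: a classifier that maps historical rainfall features to a binary debris-flow label. The feature names and thresholds are hypothetical.

```python
# Hypothetical illustration only: the paper's five models and actual rainfall
# features are not specified in this snippet.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-in features: e.g. 15-minute, 1-hour, and 24-hour rainfall accumulations (mm).
X = rng.gamma(shape=2.0, scale=5.0, size=(1000, 3))
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(0.0, 2.0, 1000) > 20).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```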
no code implementations • 22 Jul 2022 • Bryan Zhu, Nicholas Lui, Jeremy Irvin, Jimmy Le, Sahil Tadwalkar, Chenghao Wang, Zutao Ouyang, Frankie Y. Liu, Andrew Y. Ng, Robert B. Jackson
Reducing methane emissions is essential for mitigating global warming.
1 code implementation • 5 Jan 2022 • Jon Braatz, Pranav Rajpurkar, Stephanie Zhang, Andrew Y. Ng, Jeanne Shen
We develop an evaluation framework inspired by the early classification literature, in order to quantify the tradeoff between diagnostic performance and inference time for sparse analytic approaches.
no code implementations • 3 Aug 2021 • Cécile Logé, Emily Ross, David Yaw Amoah Dadey, Saahil Jain, Adriel Saporta, Andrew Y. Ng, Pranav Rajpurkar
Recent advances in Natural Language Processing (NLP), and specifically automated Question Answering (QA) systems, have demonstrated both impressive linguistic fluency and a pernicious tendency to reflect social biases.
no code implementations • 28 Jun 2021 • Saahil Jain, Ashwin Agrawal, Adriel Saporta, Steven QH Truong, Du Nguyen Duong, Tan Bui, Pierre Chambon, Yuhao Zhang, Matthew P. Lungren, Andrew Y. Ng, Curtis P. Langlotz, Pranav Rajpurkar
We release a development dataset, which contains board-certified radiologist annotations for 500 radiology reports from the MIMIC-CXR dataset (14,579 entities and 10,889 relations), and a test dataset, which contains two independent sets of board-certified radiologist annotations for 100 radiology reports split equally across the MIMIC-CXR and CheXpert datasets.
no code implementations • 6 May 2021 • Tianyuan Huang, Zhecheng Wang, Hao Sheng, Andrew Y. Ng, Ram Rajagopal
Recent urbanization has coincided with the enrichment of geotagged data, such as street view and point-of-interest (POI).
no code implementations • 21 Apr 2021 • Bryan Gopal, Ryan W. Han, Gautham Raghupathi, Andrew Y. Ng, Geoffrey H. Tison, Pranav Rajpurkar
We propose 3KG, a physiologically-inspired contrastive learning approach that generates views using 3D augmentations of the 12-lead electrocardiogram.
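A minimal sketch of the kind of view generation this describes: apply a random 3D rotation to a three-dimensional (vectorcardiogram-like) representation of the ECG to obtain an augmented view. The mapping between the 12 leads and the 3D space, and the angle range, are assumptions not given in this snippet.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def random_3d_view(vcg: np.ndarray, max_deg: float = 45.0, seed=None) -> np.ndarray:
    """Rotate a (3, T) vectorcardiogram-like signal by random angles about x, y, z."""
    rng = np.random.default_rng(seed)
    angles = rng.uniform(-max_deg, max_deg, size=3)
    R = Rotation.from_euler("xyz", angles, degrees=True).as_matrix()
    return R @ vcg

vcg = np.random.randn(3, 5000)  # stand-in for a 3D ECG representation
view_a, view_b = random_3d_view(vcg, seed=1), random_3d_view(vcg, seed=2)  # a positive pair
```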
no code implementations • 1 Apr 2021 • Saahil Jain, Akshay Smit, Andrew Y. Ng, Pranav Rajpurkar
Next, after training image classification models using labels generated from the different radiology report labelers on one of the largest datasets of chest X-rays, we show that an image classification model trained on labels from the VisualCheXbert labeler outperforms image classification models trained on labels from the CheXpert and CheXbert labelers.
1 code implementation • 26 Mar 2021 • Akshay Smit, Damir Vrabac, Yujie He, Andrew Y. Ng, Andrew L. Beam, Pranav Rajpurkar
We propose a selective learning method using meta-learning and deep reinforcement learning for medical image interpretation in the setting of limited labeling resources.
no code implementations • 18 Mar 2021 • Emma Chen, Andy Kim, Rayan Krishnan, Jin Long, Andrew Y. Ng, Pranav Rajpurkar
A major obstacle to the integration of deep learning models for chest x-ray interpretation into clinical settings is the lack of understanding of their failure modes.
no code implementations • 8 Mar 2021 • Siyu Shi, Ishaan Malhi, Kevin Tran, Andrew Y. Ng, Pranav Rajpurkar
Second, we evaluate whether models trained on seen diseases can detect seen diseases when co-occurring with diseases outside the subset (unseen diseases).
1 code implementation • 23 Feb 2021 • Saahil Jain, Akshay Smit, Steven QH Truong, Chanh DT Nguyen, Minh-Thanh Huynh, Mudit Jain, Victoria A. Young, Andrew Y. Ng, Matthew P. Lungren, Pranav Rajpurkar
We also find that VisualCheXbert agrees better with radiologists labeling chest X-ray images than radiologists labeling the corresponding radiology reports do, by an average F1 score across several medical conditions of between 0.12 (95% CI 0.09, 0.15) and 0.21 (95% CI 0.18, 0.24).
no code implementations • 21 Feb 2021 • Yen Nhi Truong Vu, Richard Wang, Niranjan Balachandar, Can Liu, Andrew Y. Ng, Pranav Rajpurkar
Our controlled experiments show that the keys to improving downstream performance on disease classification are (1) using patient metadata to appropriately create positive pairs from different images with the same underlying pathologies, and (2) maximizing the number of different images used in query pairing.
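A minimal sketch of point (1): form contrastive positive pairs from two different images of the same patient rather than from two augmentations of a single image. The record fields are hypothetical; only the pairing logic is shown.

```python
import random
from collections import defaultdict

records = [
    {"path": "p1_study1.jpg", "patient_id": "p1"},
    {"path": "p1_study2.jpg", "patient_id": "p1"},
    {"path": "p2_study1.jpg", "patient_id": "p2"},
    {"path": "p2_study2.jpg", "patient_id": "p2"},
]

by_patient = defaultdict(list)
for r in records:
    by_patient[r["patient_id"]].append(r["path"])

def sample_positive_pair(rng=random):
    """Two distinct images from the same patient form a positive pair."""
    patient = rng.choice([p for p, imgs in by_patient.items() if len(imgs) >= 2])
    return tuple(rng.sample(by_patient[patient], 2))

print(sample_positive_pair())
```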
1 code implementation • 21 Feb 2021 • Soham Gadgil, Mark Endo, Emily Wen, Andrew Y. Ng, Pranav Rajpurkar
Medical image segmentation models are typically supervised by expert annotations at the pixel-level, which can be expensive to acquire.
no code implementations • 17 Feb 2021 • Pranav Rajpurkar, Anirudh Joshi, Anuj Pareek, Andrew Y. Ng, Matthew P. Lungren
Recent advances in training deep learning models have demonstrated the potential to provide accurate chest X-ray interpretation and increase access to radiology expertise.
no code implementations • 18 Jan 2021 • Alexander Ke, William Ellsworth, Oishi Banerjee, Andrew Y. Ng, Pranav Rajpurkar
First, we find no relationship between ImageNet performance and CheXpert performance for models both with and without pretraining.
no code implementations • 1 Jan 2021 • Hari Sowrirajan, Jing Bo Yang, Andrew Y. Ng, Pranav Rajpurkar
Using 0.1% of labeled training data, we find that a linear model trained on MoCo-pretrained representations outperforms one trained on representations without MoCo-pretraining by an AUC of 0.096 (95% CI 0.061, 0.130), indicating that MoCo-pretrained representations are of higher quality.
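A minimal sketch of the linear-evaluation setup behind this comparison: freeze a pretrained encoder, extract features for the small labeled subset, and fit a linear classifier on top. The torchvision ResNet below is a generic stand-in for a MoCo-pretrained backbone; loading actual MoCo-CXR weights is omitted.

```python
import torch
from torchvision.models import resnet18
from sklearn.linear_model import LogisticRegression

encoder = resnet18(weights=None)
encoder.fc = torch.nn.Identity()       # use the backbone as a feature extractor
encoder.eval()

images = torch.randn(64, 3, 224, 224)  # stand-in for the small labeled subset
labels = torch.randint(0, 2, (64,))

with torch.no_grad():
    feats = encoder(images).numpy()    # frozen representations

probe = LogisticRegression(max_iter=1000).fit(feats, labels.numpy())
```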
no code implementations • 14 Nov 2020 • Hao Sheng, Jeremy Irvin, Sasankh Munukutla, Shawn Zhang, Christopher Cross, Kyle Story, Rose Rustowicz, Cooper Elsworth, Zutao Yang, Mark Omara, Ritesh Gautam, Robert B. Jackson, Andrew Y. Ng
In this work, we develop deep learning algorithms that leverage freely available high-resolution aerial imagery to automatically detect oil and gas infrastructure, one of the largest contributors to global methane emissions.
no code implementations • 12 Nov 2020 • Pranav Rajpurkar, Anirudh Joshi, Anuj Pareek, Jeremy Irvin, Andrew Y. Ng, Matthew Lungren
In this study, we measured the diagnostic performance for 8 different chest x-ray models when applied to photos of chest x-rays.
1 code implementation • 11 Nov 2020 • Jeremy Irvin, Hao Sheng, Neel Ramachandran, Sonja Johnson-Yu, Sharon Zhou, Kyle Story, Rose Rustowicz, Cooper Elsworth, Kemen Austin, Andrew Y. Ng
Characterizing the processes leading to deforestation is critical to the development and implementation of targeted forest conservation and management policies.
no code implementations • 28 Oct 2020 • Viswesh Krishna, Anirudh Joshi, Philip L. Bulterys, Eric Yang, Andrew Y. Ng, Pranav Rajpurkar
The application of deep learning to pathology assumes the existence of digital whole slide images of pathology slides.
2 code implementations • 11 Oct 2020 • Hari Sowrirajan, Jingbo Yang, Andrew Y. Ng, Pranav Rajpurkar
In this work, we propose MoCo-CXR, an adaptation of the contrastive learning method Momentum Contrast (MoCo), to produce models with better representations and initializations for the detection of pathologies in chest X-rays.
no code implementations • 9 Oct 2020 • Eric Zelikman, Sharon Zhou, Jeremy Irvin, Cooper Raterink, Hao Sheng, Anand Avati, Jack Kelly, Ram Rajagopal, Andrew Y. Ng, David Gagne
Advancing probabilistic solar forecasting methods is essential to supporting the integration of solar energy into the electricity grid.
1 code implementation • 17 Sep 2020 • Damir Vrabac, Akshay Smit, Rebecca Rojansky, Yasodha Natkunam, Ranjana H. Advani, Andrew Y. Ng, Sebastian Fernandez-Pol, Pranav Rajpurkar
We used a deep learning model to segment all tumor nuclei in the ROIs, and computed several geometric features for each segmented nucleus.
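A minimal sketch of the feature-extraction step, using scikit-image; the specific geometric features the study computed are not listed in this snippet, so the ones below are illustrative.

```python
import numpy as np
from skimage.measure import label, regionprops

mask = np.zeros((64, 64), dtype=bool)   # stand-in for a binary nucleus segmentation mask
mask[10:20, 10:22] = True
mask[40:52, 30:38] = True

features = []
for region in regionprops(label(mask)):
    features.append({
        "area": region.area,
        "perimeter": region.perimeter,
        "eccentricity": region.eccentricity,
        "solidity": region.solidity,
    })
print(features)
```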
1 code implementation • 13 Jul 2020 • Nick A. Phillips, Pranav Rajpurkar, Mark Sabini, Rayan Krishnan, Sharon Zhou, Anuj Pareek, Nguyet Minh Phu, Chris Wang, Mudit Jain, Nguyen Duong Du, Steven QH Truong, Andrew Y. Ng, Matthew P. Lungren
We introduce CheXphoto, a dataset of smartphone photos and synthetic photographic transformations of chest x-rays sampled from the CheXpert dataset.
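A rough sketch of what a synthetic photographic transformation might look like; the actual CheXphoto transformation set and parameters are not given in this snippet, so these torchvision stand-ins only illustrate the idea of simulating a photographed screen.

```python
from torchvision import transforms

synthetic_photo = transforms.Compose([
    transforms.RandomPerspective(distortion_scale=0.2, p=1.0),  # off-angle camera
    transforms.ColorJitter(brightness=0.3, contrast=0.3),       # glare / exposure shifts
    transforms.GaussianBlur(kernel_size=5),                     # focus / motion blur
])
# photo_like = synthetic_photo(digital_chest_xray_image)
```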
1 code implementation • ICLR 2021 • Sharon Zhou, Eric Zelikman, Fred Lu, Andrew Y. Ng, Gunnar Carlsson, Stefano Ermon
Learning disentangled representations is regarded as a fundamental task for improving the generalization, robustness, and interpretability of generative models.
3 code implementations • 20 Apr 2020 • Akshay Smit, Saahil Jain, Pranav Rajpurkar, Anuj Pareek, Andrew Y. Ng, Matthew P. Lungren
The extraction of labels from radiology text reports enables large-scale training of medical imaging models.
no code implementations • 26 Feb 2020 • Pranav Rajpurkar, Anirudh Joshi, Anuj Pareek, Phil Chen, Amirhossein Kiani, Jeremy Irvin, Andrew Y. Ng, Matthew P. Lungren
First, we find that the top 10 chest x-ray models on the CheXpert competition achieve an average AUC of 0.851 on the task of detecting TB on two public TB datasets without fine-tuning or including the TB labels in training data.
1 code implementation • 7 Feb 2020 • Sharon Zhou, Jiequan Zhang, Hang Jiang, Torbjorn Lundh, Andrew Y. Ng
Data augmentation has led to substantial improvements in the performance and generalization of deep models, and remains a highly adaptable method for evolving model architectures and varying amounts of data, in particular extremely scarce amounts of available training data.
4 code implementations • ICML 2020 • Tony Duan, Anand Avati, Daisy Yi Ding, Khanh K. Thai, Sanjay Basu, Andrew Y. Ng, Alejandro Schuler
NGBoost generalizes gradient boosting to probabilistic regression by treating the parameters of the conditional distribution as targets for a multiparameter boosting algorithm.
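A usage sketch assuming the open-source ngboost package that accompanies this work: the parameters of the conditional distribution are boosted jointly, and prediction returns a full distribution rather than a point estimate.

```python
from ngboost import NGBRegressor
from ngboost.distns import Normal
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ngb = NGBRegressor(Dist=Normal, n_estimators=500).fit(X_train, y_train)
point_preds = ngb.predict(X_test)   # distribution means, usable as point estimates
dist = ngb.pred_dist(X_test)        # full predictive Normal for each test point
print("first point predictions:", point_preds[:3])
print("test NLL:", -dist.logpdf(y_test).mean())
```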
3 code implementations • 10 Jun 2019 • David Rolnick, Priya L. Donti, Lynn H. Kaack, Kelly Kochanski, Alexandre Lacoste, Kris Sankaran, Andrew Slavin Ross, Nikola Milojevic-Dupont, Natasha Jaques, Anna Waldman-Brown, Alexandra Luccioni, Tegan Maharaj, Evan D. Sherwin, S. Karthik Mukkavilli, Konrad P. Kording, Carla Gomes, Andrew Y. Ng, Demis Hassabis, John C. Platt, Felix Creutzig, Jennifer Chayes, Yoshua Bengio
Climate change is one of the greatest challenges facing humanity, and we, as machine learning experts, may wonder how we can help.
12 code implementations • 21 Jan 2019 • Jeremy Irvin, Pranav Rajpurkar, Michael Ko, Yifan Yu, Silviana Ciurea-Ilcus, Chris Chute, Henrik Marklund, Behzad Haghgoo, Robyn Ball, Katie Shpanskaya, Jayne Seekins, David A. Mong, Safwan S. Halabi, Jesse K. Sandberg, Ricky Jones, David B. Larson, Curtis P. Langlotz, Bhavik N. Patel, Matthew P. Lungren, Andrew Y. Ng
On a validation set of 200 chest radiographic studies which were manually annotated by 3 board-certified radiologists, we find that different uncertainty approaches are useful for different pathologies.
Ranked #93 on Multi-Label Classification on CheXpert
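A minimal sketch of two of the uncertainty approaches the paper compares: the labeler emits 1.0 (positive), 0.0 (negative), -1.0 (uncertain), or blank, and the U-Zeros / U-Ones policies map the uncertain label to 0 or 1 before training. The column values below are illustrative.

```python
import pandas as pd

labels = pd.DataFrame({
    "Atelectasis":  [1.0, -1.0, 0.0, None],
    "Cardiomegaly": [-1.0, 0.0, 1.0, -1.0],
})

u_zeros = labels.replace(-1.0, 0.0)   # U-Zeros: treat uncertain as negative
u_ones  = labels.replace(-1.0, 1.0)   # U-Ones: treat uncertain as positive
```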
no code implementations • Nature Medicine 2019 • Awni Y. Hannun, Pranav Rajpurkar, Masoumeh Haghpanahi, Geoffrey H. Tison, Codie Bourn, Mintu P. Turakhia, Andrew Y. Ng
With specificity fixed at the average specificity achieved by cardiologists, the sensitivity of the DNN exceeded the average cardiologist sensitivity for all rhythm classes.
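A minimal sketch of the comparison described here: fix the model's operating point at the cardiologists' average specificity and read off its sensitivity from the ROC curve. The scores, labels, and specificity value below are synthetic stand-ins.

```python
import numpy as np
from sklearn.metrics import roc_curve

y_true = np.random.default_rng(0).integers(0, 2, 1000)
y_score = y_true * 0.6 + np.random.default_rng(1).random(1000) * 0.7

fpr, tpr, _ = roc_curve(y_true, y_score)
specificity = 1 - fpr

avg_cardiologist_specificity = 0.9   # hypothetical value
idx = np.argmin(np.abs(specificity - avg_cardiologist_specificity))
print("model sensitivity at matched specificity:", tpr[idx])
```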
1 code implementation • Medicine 2018 • Nicholas Bien, Pranav Rajpurkar, Robyn L. Ball, Jeremy Irvin, Allison Park, Erik Jones, Michael Bereket, Bhavik N. Patel, Kristen W. Yeom, Katie Shpanskaya, Safwan Halabi, Evan Zucker, Gary Fanton, Derek F. Amanatullah, Christopher F. Beaulieu, Geoffrey M. Riley, Russell J. Stewart, Francis G. Blankenberg, David B. Larson, Ricky H. Jones, Curtis P. Langlotz, Andrew Y. Ng, Matthew P. Lungren
Magnetic resonance imaging (MRI) of the knee is the preferred method for diagnosing knee injuries.
11 code implementations • 11 Dec 2017 • Pranav Rajpurkar, Jeremy Irvin, Aarti Bagul, Daisy Ding, Tony Duan, Hershel Mehta, Brandon Yang, Kaylie Zhu, Dillon Laird, Robyn L. Ball, Curtis Langlotz, Katie Shpanskaya, Matthew P. Lungren, Andrew Y. Ng
To evaluate models robustly and to get an estimate of radiologist performance, we collect additional labels from six board-certified Stanford radiologists on the test set, consisting of 207 musculoskeletal studies.
47 code implementations • 14 Nov 2017 • Pranav Rajpurkar, Jeremy Irvin, Kaylie Zhu, Brandon Yang, Hershel Mehta, Tony Duan, Daisy Ding, Aarti Bagul, Curtis Langlotz, Katie Shpanskaya, Matthew P. Lungren, Andrew Y. Ng
We develop an algorithm that can detect pneumonia from chest X-rays at a level exceeding practicing radiologists.
Ranked #3 on Pneumonia Detection on ChestX-ray14
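A minimal sketch of the model family this paper describes (a 121-layer DenseNet producing a single pneumonia probability), using torchvision as a stand-in; training and evaluation details are omitted.

```python
import torch
import torch.nn as nn
from torchvision.models import densenet121

model = densenet121(weights=None)
model.classifier = nn.Linear(model.classifier.in_features, 1)  # single pneumonia logit

images = torch.randn(4, 3, 224, 224)                 # stand-in chest x-ray batch
targets = torch.tensor([[1.0], [0.0], [1.0], [0.0]])

loss = nn.BCEWithLogitsLoss()(model(images), targets)
loss.backward()
```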
7 code implementations • 6 Jul 2017 • Pranav Rajpurkar, Awni Y. Hannun, Masoumeh Haghpanahi, Codie Bourn, Andrew Y. Ng
We develop an algorithm which exceeds the performance of board-certified cardiologists in detecting a wide range of heart arrhythmias from electrocardiograms recorded with a single-lead wearable monitor.
no code implementations • 7 Mar 2017 • Ziang Xie, Sida I. Wang, Jiwei Li, Daniel Lévy, Aiming Nie, Dan Jurafsky, Andrew Y. Ng
Data noising is an effective technique for regularizing neural network models.
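A minimal sketch of the unigram-noising scheme this work analyzes as a smoothing analogue: with some probability, replace each token with a sample from the corpus unigram distribution. The noising probability and toy corpus are illustrative.

```python
import random
from collections import Counter

corpus = "the cat sat on the mat the dog sat on the rug".split()
counts = Counter(corpus)
tokens, weights = zip(*counts.items())

def unigram_noise(sentence, gamma=0.2, rng=random):
    """Replace each token with a unigram sample with probability gamma."""
    return [
        rng.choices(tokens, weights=weights)[0] if rng.random() < gamma else w
        for w in sentence
    ]

print(unigram_noise("the cat sat on the mat".split()))
```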
3 code implementations • 31 Mar 2016 • Ziang Xie, Anand Avati, Naveen Arivazhagan, Dan Jurafsky, Andrew Y. Ng
Motivated by these issues, we present a neural network-based approach to language correction.
no code implementations • 7 Apr 2015 • Brody Huval, Tao Wang, Sameep Tandon, Jeff Kiske, Will Song, Joel Pazhayampallil, Mykhaylo Andriluka, Pranav Rajpurkar, Toki Migimatsu, Royce Cheng-Yue, Fernando Mujica, Adam Coates, Andrew Y. Ng
We collect a large data set of highway data and apply deep learning and computer vision algorithms to problems such as car and lane detection.
Ranked #2 on Lane Detection on Caltech Lanes Cordova
24 code implementations • 17 Dec 2014 • Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates, Andrew Y. Ng
We present a state-of-the-art speech recognition system developed using end-to-end deep learning.
4 code implementations • 12 Aug 2014 • Awni Y. Hannun, Andrew L. Maas, Daniel Jurafsky, Andrew Y. Ng
This approach to decoding enables first-pass speech recognition with a language model, completely unaided by the cumbersome infrastructure of HMM-based systems.
1 code implementation • 30 Jun 2014 • Andrew L. Maas, Peng Qi, Ziang Xie, Awni Y. Hannun, Christopher T. Lengerich, Daniel Jurafsky, Andrew Y. Ng
We compare standard DNNs to convolutional networks, and present the first experiments using locally-connected, untied neural networks for acoustic modeling.
Ranked #11 on Speech Recognition on swb_hub_500 WER fullSWBCH
no code implementations • TACL 2014 • Richard Socher, Andrej Karpathy, Quoc V. Le, Christopher D. Manning, Andrew Y. Ng
Previous work on Recursive Neural Networks (RNNs) shows that these models can produce compositional feature vectors for accurately representing and classifying sentences or images.
2 code implementations • NeurIPS 2013 • Richard Socher, Milind Ganjoo, Hamsa Sridhar, Osbert Bastani, Christopher D. Manning, Andrew Y. Ng
This work introduces a model that can recognize objects in images even if no training data is available for the objects.
no code implementations • NeurIPS 2012 • Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Marc'Aurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, Quoc V. Le, Andrew Y. Ng
Recent work in unsupervised feature learning and deep learning has shown that being able to train large models can dramatically improve performance.
no code implementations • NeurIPS 2012 • Adam Coates, Andrej Karpathy, Andrew Y. Ng
Recent work in unsupervised feature learning has focused on the goal of discovering high-level features from unlabeled images.
1 code implementation • 29 Dec 2011 • Quoc V. Le, Marc'Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai Chen, Greg S. Corrado, Jeff Dean, Andrew Y. Ng
For example, is it possible to learn a face detector using only unlabeled images?
no code implementations • NeurIPS 2011 • Jiquan Ngiam, Zhenghao Chen, Sonia A. Bhaskar, Pang W. Koh, Andrew Y. Ng
Unsupervised feature learning has been shown to be effective at learning representations that perform well on image, video and audio classification.
no code implementations • NeurIPS 2011 • Adam Coates, Andrew Y. Ng
Recent deep learning and unsupervised feature learning systems that learn from unlabeled data have achieved high performance in benchmarks by using extremely large architectures with many features (hidden units) at each layer.
no code implementations • NeurIPS 2011 • Maneesh Bhand, Ritvik Mudur, Bipin Suresh, Andrew Saxe, Andrew Y. Ng
In this work we focus on that component of adaptation which occurs during an organism's lifetime, and show that a number of unsupervised feature learning algorithms can account for features of normal receptive field properties across multiple primary sensory cortices.
no code implementations • NeurIPS 2011 • Quoc V. Le, Alexandre Karpenko, Jiquan Ngiam, Andrew Y. Ng
We show that the soft reconstruction cost can also be used to prevent replicated features in tiled convolutional neural networks.
Ranked #116 on Image Classification on STL-10
2 code implementations • Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies 2011 • Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng
We present a model that uses a mix of unsupervised and supervised techniques to learn word vectors capturing semantic term–document information as well as rich sentiment content.
no code implementations • NeurIPS 2010 • J. Z. Kolter, Siddharth Batra, Andrew Y. Ng
Energy disaggregation is the task of taking a whole-home energy signal and separating it into its component appliances.
no code implementations • NeurIPS 2010 • Jiquan Ngiam, Zhenghao Chen, Daniel Chia, Pang W. Koh, Quoc V. Le, Andrew Y. Ng
Using convolutional (tied) weights significantly reduces the number of parameters that have to be learned, and also allows translational invariance to be hard-coded into the architecture.
no code implementations • NeurIPS 2009 • Honglak Lee, Peter Pham, Yan Largman, Andrew Y. Ng
In this paper, we apply convolutional deep belief networks to audio data and empirically evaluate them on various audio classification tasks.
no code implementations • NeurIPS 2009 • Ian Goodfellow, Honglak Lee, Quoc V. Le, Andrew Saxe, Andrew Y. Ng
Our evaluation metrics can also be used to evaluate future work in unsupervised deep learning, and thus help the development of future algorithms.
no code implementations • NeurIPS 2007 • Chuan-Sheng Foo, Chuong B. Do, Andrew Y. Ng
Using multiple regularization hyperparameters is an effective method for managing model complexity in problems where input features have varying amounts of noise.
no code implementations • NeurIPS 2007 • Honglak Lee, Chaitanya Ekanadham, Andrew Y. Ng
This suggests that our sparse variant of deep belief networks holds promise for modeling higher-order features.
2 code implementations • 1 Jan 2003 • David M. Blei, Andrew Y. Ng, Michael I. Jordan
Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities.
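A usage sketch of latent Dirichlet allocation on a toy corpus, using the scikit-learn implementation as a stand-in for the model introduced here: each document becomes a mixture over topics, and each topic a distribution over words.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the cat chased the mouse",
    "dogs and cats make good pets",
    "the stock market fell sharply today",
    "investors worry about market volatility",
]

counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
doc_topics = lda.transform(counts)   # per-document topic proportions
print(doc_topics.round(2))
```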