Search Results for author: Yindalon Aphinyanaphongs

Found 9 papers, 4 papers with code

SMM4H Shared Task 2020 - A Hybrid Pipeline for Identifying Prescription Drug Abuse from Twitter: Machine Learning, Deep Learning, and Post-Processing

no code implementations SMM4H (COLING) 2020 Isabel Metzger, Emir Y. Haskovic, Allison Black, Whitley M. Yi, Rajat S. Chandra, Mark T. Rutledge, William McMahon, Yindalon Aphinyanaphongs

This paper presents our approach to multi-class text categorization of tweets mentioning prescription medications as being indicative of potential abuse/misuse (A), consumption/non-abuse (C), mention-only (M), or an unrelated reference (U) using natural language processing techniques.

Data Augmentation Text Categorization +1
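The four-way labeling task above (abuse/misuse, consumption, mention-only, unrelated) can be illustrated with a minimal baseline classifier. This is a hedged sketch on toy data, not the authors' hybrid pipeline, which combines machine learning, deep learning, and post-processing; the example tweets and labels below are invented for illustration.

```python
# Minimal multi-class tweet categorization baseline (labels A/C/M/U).
# Toy data only; not the paper's actual model or corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A = abuse/misuse, C = consumption/non-abuse, M = mention-only, U = unrelated
tweets = [
    "took way too many xanax last night",        # A
    "took my prescribed adderall this morning",  # C
    "new article about oxycodone regulations",   # M
    "adderall is the name of my cat",            # U
] * 5  # duplicated so each class has several training examples
labels = ["A", "C", "M", "U"] * 5

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word unigrams and bigrams
    LogisticRegression(max_iter=1000),
)
clf.fit(tweets, labels)
print(clf.predict(["took my prescribed adderall this morning"])[0])
```

A real pipeline would add the data-augmentation and post-processing steps the title describes on top of a learned classifier like this.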

A dynamic risk score for early prediction of cardiogenic shock using machine learning

no code implementations 22 Mar 2023 Yuxuan Hu, Albert Lui, Mark Goldstein, Mukund Sudarshan, Andrea Tinsay, Cindy Tsui, Samuel Maidman, John Medamana, Neil Jethani, Aahlad Puli, Vuthy Nguy, Yindalon Aphinyanaphongs, Nicholas Kiefer, Nathaniel Smilowitz, James Horowitz, Tania Ahuja, Glenn I Fishman, Judith Hochman, Stuart Katz, Samuel Bernard, Rajesh Ranganath

We developed a deep learning-based risk stratification tool, called CShock, for patients admitted into the cardiac ICU with acute decompensated heart failure and/or myocardial infarction to predict onset of cardiogenic shock.

New-Onset Diabetes Assessment Using Artificial Intelligence-Enhanced Electrocardiography

no code implementations 5 May 2022 Neil Jethani, Aahlad Puli, Hao Zhang, Leonid Garber, Lior Jankelson, Yindalon Aphinyanaphongs, Rajesh Ranganath

We found ECG-based assessment outperforms the ADA Risk test, achieving a higher area under the curve (0.80 vs. 0.68) and positive predictive value (13% vs. 9%), 2.6 times the prevalence of diabetes in the cohort.
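The "2.6 times the prevalence" figure is just the positive predictive value divided by the cohort's base rate (13% / ~5% ≈ 2.6). A minimal sketch of computing AUC, PPV, and that lift on synthetic scores (the data and thresholds here are invented, not the study's):

```python
# Illustrative AUC / PPV / lift computation on synthetic data;
# not the study's cohort or model.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.binomial(1, 0.05, 10_000)  # ~5% disease prevalence
# Positives score higher on average than negatives.
scores = np.where(y == 1, rng.normal(1.2, 1.0, 10_000), rng.normal(0.0, 1.0, 10_000))

auc = roc_auc_score(y, scores)
flagged = scores > np.quantile(scores, 0.95)  # flag the top 5% highest-risk
ppv = y[flagged].mean()          # fraction of flagged patients who are positive
prevalence = y.mean()
print(f"AUC={auc:.2f}, PPV={ppv:.1%}, lift={ppv / prevalence:.1f}x over prevalence")
```

With a 1.2 standard-deviation separation between classes, the AUC lands near 0.80, matching the scale of the numbers reported above.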

Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations

1 code implementation 2 Mar 2021 Neil Jethani, Mukund Sudarshan, Yindalon Aphinyanaphongs, Rajesh Ranganath

While the need for interpretable machine learning has been established, many common approaches are slow, lack fidelity, or are hard to evaluate.

Interpretable Machine Learning

COVID-19 Prognosis via Self-Supervised Representation Learning and Multi-Image Prediction

1 code implementation 13 Jan 2021 Anuroop Sriram, Matthew Muckley, Koustuv Sinha, Farah Shamout, Joelle Pineau, Krzysztof J. Geras, Lea Azour, Yindalon Aphinyanaphongs, Nafissa Yakubova, William Moore

The first is deterioration prediction from a single image, where our model achieves an area under receiver operating characteristic curve (AUC) of 0.742 for predicting an adverse event within 96 hours (compared to 0.703 with supervised pretraining) and an AUC of 0.765 for predicting oxygen requirements greater than 6 L a day at 24 hours (compared to 0.749 with supervised pretraining).

Representation Learning Self-Supervised Learning

An artificial intelligence system for predicting the deterioration of COVID-19 patients in the emergency department

1 code implementation 4 Aug 2020 Farah E. Shamout, Yiqiu Shen, Nan Wu, Aakash Kaku, Jungkyu Park, Taro Makino, Stanisław Jastrzębski, Duo Wang, Ben Zhang, Siddhant Dogra, Meng Cao, Narges Razavian, David Kudlowitz, Lea Azour, William Moore, Yvonne W. Lui, Yindalon Aphinyanaphongs, Carlos Fernandez-Granda, Krzysztof J. Geras

In order to verify performance in a real clinical setting, we silently deployed a preliminary version of the deep neural network at New York University Langone Health during the first wave of the pandemic, which produced accurate predictions in real-time.

COVID-19 Diagnosis Decision Making +1

Assessment of Amazon Comprehend Medical: Medication Information Extraction

no code implementations 2 Feb 2020 Benedict Guzman, Isabel Metzger, MS, Yindalon Aphinyanaphongs, M.D., Ph.D., Himanshu Grover, Ph.D.

To further establish the generalizability of its medication extraction performance, a set of random internal clinical text notes from NYU Langone Medical Center were also included in this work.

A Workflow for Visual Diagnostics of Binary Classifiers using Instance-Level Explanations

1 code implementation 4 May 2017 Josua Krause, Aritra Dasgupta, Jordan Swartz, Yindalon Aphinyanaphongs, Enrico Bertini

Human-in-the-loop data analysis applications necessitate greater transparency in machine learning models for experts to understand and trust their decisions.

BIG-bench Machine Learning
