Search Results for author: Daisy Yi Ding

Found 5 papers, 2 papers with code

Handling Missing Data with Graph Representation Learning

no code implementations · NeurIPS 2020 · Jiaxuan You, Xiaobai Ma, Daisy Yi Ding, Mykel Kochenderfer, Jure Leskovec

GRAPE tackles the missing-data problem with a graph representation: observations and features are viewed as the two node types of a bipartite graph, and the observed feature values as its edges.

Graph Representation Learning · Imputation
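The bipartite-graph view described above can be sketched in a few lines. This is a minimal illustration in plain NumPy, not taken from the GRAPE codebase; the variable names and the toy matrix are ours.

```python
import numpy as np

# Toy data matrix with missing entries (NaN = unobserved).
X = np.array([[1.0, np.nan, 3.0],
              [np.nan, 2.0, 4.0]])
n_obs, n_feat = X.shape

# Observations are nodes 0..n_obs-1, features are nodes n_obs..n_obs+n_feat-1.
# Every *observed* entry becomes an edge carrying its value as the edge feature.
edges = [(i, n_obs + j, X[i, j])
         for i in range(n_obs)
         for j in range(n_feat)
         if not np.isnan(X[i, j])]

# Imputing a missing entry (i, j) then becomes edge-level prediction
# between observation node i and feature node n_obs + j.
```

Under this construction, feature imputation and downstream label prediction both reduce to learning on the bipartite graph rather than on the raw, incomplete matrix.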

NGBoost: Natural Gradient Boosting for Probabilistic Prediction

4 code implementations · ICML 2020 · Tony Duan, Anand Avati, Daisy Yi Ding, Khanh K. Thai, Sanjay Basu, Andrew Y. Ng, Alejandro Schuler

NGBoost generalizes gradient boosting to probabilistic regression by treating the parameters of the conditional distribution as targets for a multiparameter boosting algorithm.

Weather Forecasting
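The multiparameter boosting idea can be sketched for a conditional Normal: each round fits one base learner per distribution parameter to the natural gradient of the negative log likelihood. This is our own toy sketch on synthetic data using scikit-learn trees, not the authors' NGBoost implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=500)

# Parameters of the conditional Normal, one value per example.
mu = np.full(500, y.mean())
log_sigma = np.zeros(500)
lr, trees = 0.1, []

for _ in range(100):
    sigma2 = np.exp(2 * log_sigma)
    # Natural gradients of the NLL in (mu, log sigma): the ordinary
    # gradients ((mu - y)/sigma^2 and 1 - (y - mu)^2/sigma^2)
    # preconditioned by the inverse Fisher information diag(1/sigma^2, 2).
    g_mu = mu - y
    g_ls = 0.5 * (1.0 - (y - mu) ** 2 / sigma2)
    # One base learner per parameter, fit to its natural gradient.
    t_mu = DecisionTreeRegressor(max_depth=3).fit(X, g_mu)
    t_ls = DecisionTreeRegressor(max_depth=3).fit(X, g_ls)
    mu -= lr * t_mu.predict(X)
    log_sigma -= lr * t_ls.predict(X)
    trees.append((t_mu, t_ls))

# mu now approximates E[y | x], and exp(log_sigma) the conditional std dev,
# so the model outputs a full predictive distribution, not a point estimate.
```

The natural-gradient step is what makes the two parameter updates comparably scaled, which is the crux of boosting several distribution parameters jointly.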

Counterfactual Reasoning for Fair Clinical Risk Prediction

no code implementations · 14 Jul 2019 · Stephen Pfohl, Tony Duan, Daisy Yi Ding, Nigam H. Shah

We investigate the extent to which the augmented counterfactual fairness criteria may be applied to develop fair models for prolonged inpatient length of stay and mortality with observational electronic health records data.

Counterfactual Inference · Decision Making +1

Learning to Summarize Radiology Findings

1 code implementation · WS 2018 · Yuhao Zhang, Daisy Yi Ding, Tianpei Qian, Christopher D. Manning, Curtis P. Langlotz

The Impression section of a radiology report summarizes crucial radiology findings in natural language and plays a central role in communicating these findings to physicians.

The Effectiveness of Multitask Learning for Phenotyping with Electronic Health Records Data

no code implementations · 9 Aug 2018 · Daisy Yi Ding, Chloé Simpson, Stephen Pfohl, Dave C. Kale, Kenneth Jung, Nigam H. Shah

We present experiments that elucidate when multitask learning with neural nets improves phenotyping performance on EHR data, relative to neural nets trained for a single phenotype and to well-tuned logistic regression baselines.
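The comparison described above can be sketched as a multitask net with a shared hidden layer and one output per phenotype, against per-phenotype logistic regressions. This is a hedged illustration on synthetic stand-in data (the paper uses real EHR data, which is not reproduced here); all names below are ours.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))

# Two related "phenotypes" driven by overlapping features, so a shared
# representation has something to exploit.
logits = X[:, :5].sum(axis=1)
Y = np.column_stack([
    (logits + rng.normal(size=1000) > 0).astype(int),
    (logits + X[:, 5] + rng.normal(size=1000) > 0).astype(int),
])

# Multitask model: MLPClassifier treats a 2-column binary target as
# multilabel, i.e. one shared hidden layer feeding one output per task.
mt = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                   random_state=0).fit(X, Y)

# Single-task baselines: one logistic regression per phenotype.
baselines = [LogisticRegression().fit(X, Y[:, k]) for k in range(2)]
```

Whether the shared-representation model actually wins depends on task relatedness and sample size, which is exactly the question the paper's experiments probe.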
