Search Results for author: Joseph Futoma

Found 11 papers, 3 papers with code

Learning to Detect Sepsis with a Multitask Gaussian Process RNN Classifier

2 code implementations ICML 2017 Joseph Futoma, Sanjay Hariharan, Katherine Heller

We present a scalable end-to-end classifier that uses streaming physiological and medication data to accurately predict the onset of sepsis, a life-threatening complication from infections that has high mortality and morbidity.

Gaussian Processes Time Series +1

Model-based Reinforcement Learning for Semi-Markov Decision Processes with Neural ODEs

1 code implementation NeurIPS 2020 Jianzhun Du, Joseph Futoma, Finale Doshi-Velez

We present two elegant solutions for modeling continuous-time dynamics, in a novel model-based reinforcement learning (RL) framework for semi-Markov decision processes (SMDPs), using neural ordinary differential equations (ODEs).

Model-based Reinforcement Learning reinforcement-learning +1
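The general pattern the abstract describes can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: a small MLP parameterizes the state derivative ds/dt = f(s, a), and each SMDP transition is integrated for its own (possibly irregular) duration with fixed-step Euler; the weights, sizes, and integrator here are all assumptions for illustration.

```python
import numpy as np

# Hypothetical sketch (not the authors' code): a continuous-time dynamics
# model ds/dt = f(s, a) given by a tiny random-weight MLP, integrated with
# fixed-step Euler. In an SMDP the time between decisions varies, so each
# transition is integrated over its own duration dt.

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(8, 3))   # input: state (2-d) + action (1-d)
W2 = rng.normal(scale=0.1, size=(2, 8))   # output: ds/dt for the 2-d state

def dynamics(state, action):
    """MLP approximating the time derivative of the state."""
    x = np.concatenate([state, [action]])
    return W2 @ np.tanh(W1 @ x)

def rollout(state, action, dt, n_steps=20):
    """Euler-integrate the learned ODE over a transition of duration dt."""
    h = dt / n_steps
    for _ in range(n_steps):
        state = state + h * dynamics(state, action)
    return state

s0 = np.array([1.0, -0.5])
s1 = rollout(s0, action=0.3, dt=2.5)  # irregular decision interval
print(s1.shape)  # (2,)
```

A learned model of this form lets a model-based RL agent simulate transitions at arbitrary time horizons rather than at a fixed discretization.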

An Improved Multi-Output Gaussian Process RNN with Real-Time Validation for Early Sepsis Detection

no code implementations 19 Aug 2017 Joseph Futoma, Sanjay Hariharan, Mark Sendak, Nathan Brajer, Meredith Clement, Armando Bedoya, Cara O'Brien, Katherine Heller

Latent function values from the Gaussian process are then fed into a deep recurrent neural network to classify patient encounters as septic or not, and the overall model is trained end-to-end using back-propagation.

Gaussian Processes Time Series Analysis
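The MGP-to-RNN pipeline described in the abstract can be sketched in miniature. This is a hedged, univariate illustration of the general pattern only (a GP posterior interpolates irregular observations onto a regular grid, whose values drive a recurrent classifier); the kernel, noise level, grid, and random RNN weights are all assumptions, and the real model is multi-output and trained end-to-end.

```python
import numpy as np

# Hypothetical sketch of the GP -> RNN pattern (not the authors' code):
# GP posterior mean interpolates an irregularly sampled vital sign onto a
# regular grid; the gridded latent values drive a small tanh-RNN with a
# sigmoid readout for the sepsis probability.

rng = np.random.default_rng(1)

def rbf(a, b, ls=2.0):
    """Squared-exponential kernel matrix between time vectors a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

# Irregularly observed univariate vital sign.
t_obs = np.array([0.3, 1.1, 2.8, 4.0, 6.5])
y_obs = np.array([0.1, 0.4, 0.9, 0.7, 0.2])
t_grid = np.linspace(0.0, 7.0, 15)        # regular grid for the RNN

# GP posterior mean at the grid times (noise variance 0.1 assumed).
K = rbf(t_obs, t_obs) + 0.1 * np.eye(len(t_obs))
mu_grid = rbf(t_grid, t_obs) @ np.linalg.solve(K, y_obs)

# Minimal tanh-RNN over the gridded latent values.
Wh = rng.normal(scale=0.5, size=(4, 4))
Wx = rng.normal(scale=0.5, size=(4, 1))
Wo = rng.normal(scale=0.5, size=(1, 4))
h = np.zeros(4)
for x in mu_grid:
    h = np.tanh(Wh @ h + Wx @ np.array([x]))
p_sepsis = 1.0 / (1.0 + np.exp(-(Wo @ h)[0]))  # probability in (0, 1)
```

In the end-to-end version, gradients flow through the RNN back into the GP hyperparameters, rather than treating interpolation as a fixed preprocessing step.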

Scalable Modeling of Multivariate Longitudinal Data for Prediction of Chronic Kidney Disease Progression

no code implementations 16 Aug 2016 Joseph Futoma, Mark Sendak, C. Blake Cameron, Katherine Heller

Prediction of the future trajectory of a disease is an important challenge for personalized medicine and population health management.

Management Variational Inference

Learning to Treat Sepsis with Multi-Output Gaussian Process Deep Recurrent Q-Networks

no code implementations ICLR 2018 Joseph Futoma, Anthony Lin, Mark Sendak, Armando Bedoya, Meredith Clement, Cara O'Brien, Katherine Heller

We evaluate our approach on a heterogeneous dataset of septic patients spanning 15 months from our university health system, and find that our learned policy could reduce patient mortality by as much as 8.2% from an overall baseline mortality rate of 13.3%.

Gaussian Processes reinforcement-learning +3

Interpretable Off-Policy Evaluation in Reinforcement Learning by Highlighting Influential Transitions

no code implementations ICML 2020 Omer Gottesman, Joseph Futoma, Yao Liu, Sonali Parbhoo, Leo Anthony Celi, Emma Brunskill, Finale Doshi-Velez

Off-policy evaluation in reinforcement learning offers the chance of using observational data to improve future outcomes in domains such as healthcare and education, but safe deployment in high stakes settings requires ways of assessing its validity.

Off-policy evaluation reinforcement-learning
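The setting the abstract refers to can be illustrated with standard per-trajectory importance sampling (this is the textbook OPE estimator, not the paper's influence-highlighting method; the bandit-style environment and policies below are invented for illustration). Large importance weights are exactly what make individual transitions "influential" on the estimate.

```python
import numpy as np

# Illustration of off-policy evaluation via importance sampling (standard
# estimator, not the paper's method): reweight returns logged under a
# behavior policy pi_b to estimate the value of an evaluation policy pi_e.

rng = np.random.default_rng(2)

def is_estimate(trajs, pi_e, pi_b):
    """Importance-sampling value estimate and per-trajectory weights."""
    values, weights = [], []
    for states, actions, rewards in trajs:
        w = np.prod([pi_e[s][a] / pi_b[s][a] for s, a in zip(states, actions)])
        values.append(w * sum(rewards))
        weights.append(w)
    return sum(values) / len(trajs), weights

pi_b = {0: [0.5, 0.5]}          # behavior policy: uniform over 2 actions
pi_e = {0: [0.9, 0.1]}          # evaluation policy: strongly prefers action 0
trajs = []
for _ in range(1000):
    a = int(rng.integers(0, 2))
    r = 1.0 if a == 0 else 0.0  # action 0 yields reward 1, action 1 yields 0
    trajs.append(([0], [a], [r]))

v_hat, weights = is_estimate(trajs, pi_e, pi_b)
# True value of pi_e is 0.9; the naive mean of logged returns would be ~0.5.
```

Trajectories with the largest weights dominate `v_hat`, which is why surfacing them to a domain expert is a natural validity check in high-stakes settings.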

Model-based metrics: Sample-efficient estimates of predictive model subpopulation performance

no code implementations 25 Apr 2021 Andrew C. Miller, Leon A. Gatys, Joseph Futoma, Emily B. Fox

We propose using an evaluation model (a model that describes the conditional distribution of the predictive model score) to form model-based metric (MBM) estimates.

Readmission Prediction

Label Shift Estimators for Non-Ignorable Missing Data

no code implementations 27 Oct 2023 Andrew C. Miller, Joseph Futoma

We consider the problem of estimating the mean of a random variable Y subject to non-ignorable missingness, i.e., where the missingness mechanism depends on Y.
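The bias this problem creates can be demonstrated in a few lines. This sketch only illustrates the setting, not the paper's label-shift estimator: it simulates a missingness mechanism that depends on Y, shows the naive observed-data mean is biased, and uses oracle inverse-probability (Horvitz-Thompson) weighting as a reference corrector; the distribution and mechanism are invented for illustration.

```python
import numpy as np

# Illustration of non-ignorable missingness (not the paper's estimator):
# when the probability of observing Y depends on Y itself, the mean of the
# observed values is biased. With the true observation probabilities,
# inverse-probability weighting recovers the population mean.

rng = np.random.default_rng(3)
n = 200_000
y = rng.normal(loc=1.0, scale=1.0, size=n)   # true population mean = 1.0

# Non-ignorable mechanism: larger Y is more likely to be observed.
p_obs = 1.0 / (1.0 + np.exp(-y))
observed = rng.random(n) < p_obs

naive = y[observed].mean()                        # biased upward
ipw = np.sum(y[observed] / p_obs[observed]) / n   # ~1.0, the true mean
```

In practice the observation probabilities are unknown, which is what makes the problem hard and motivates estimators that borrow structure from the label shift literature.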
