1 code implementation • 7 Dec 2023 • Harvineet Singh, Fan Xia, Mi-Ok Kim, Romain Pirracchio, Rumi Chunara, Jean Feng
In fairness audits, a standard objective is to detect whether a given algorithm performs substantially differently between subgroups.
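That audit objective, detecting a performance difference between subgroups, can be illustrated with a minimal sketch: compute a metric per subgroup and report the largest gap. This is only a toy illustration of the quantity being audited, not the detection procedure from the paper; `subgroup_accuracy_gap` and the toy data are hypothetical.

```python
import numpy as np

def subgroup_accuracy_gap(y_true, y_pred, groups):
    """Per-subgroup accuracy and the largest gap between subgroups.

    Illustrative only: a real audit would also quantify uncertainty
    in this gap before declaring a disparity.
    """
    accs = {}
    for g in np.unique(groups):
        mask = groups == g
        accs[g] = float(np.mean(y_true[mask] == y_pred[mask]))
    gap = max(accs.values()) - min(accs.values())
    return accs, gap

# Toy data: subgroup B has a higher error rate than subgroup A.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 0, 1])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
accs, gap = subgroup_accuracy_gap(y_true, y_pred, groups)
```

On this toy data the accuracies are 0.75 for A and 0.5 for B, so the audited gap is 0.25.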
no code implementations • 20 Nov 2023 • Jean Feng, Adarsh Subbaswamy, Alexej Gossmann, Harvineet Singh, Berkman Sahiner, Mi-Ok Kim, Gene Pennello, Nicholas Petrick, Romain Pirracchio, Fan Xia
When an ML algorithm interacts with its environment, the algorithm can affect the data-generating mechanism and be a major source of bias when evaluating its standalone performance, an issue known as performativity.
1 code implementation • 28 Jul 2023 • Jean Feng, Alexej Gossmann, Romain Pirracchio, Nicholas Petrick, Gene Pennello, Berkman Sahiner
In a well-calibrated risk prediction model, the average predicted probability is close to the true event rate for any given subgroup.
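The subgroup-calibration criterion can be checked directly: within each subgroup, compare the mean predicted probability to the observed event rate. The sketch below is a hypothetical diagnostic, not the recalibration method proposed in the paper; the function name and toy data are assumptions.

```python
import numpy as np

def subgroup_calibration(y_true, p_pred, groups):
    """For each subgroup, return (mean predicted probability,
    observed event rate). A well-calibrated model has these two
    numbers close in every subgroup."""
    out = {}
    for g in np.unique(groups):
        m = groups == g
        out[g] = (float(np.mean(p_pred[m])), float(np.mean(y_true[m])))
    return out

# Toy data: subgroup A looks calibrated, subgroup B is under-predicted.
y_true = np.array([1, 0, 1, 0, 1, 1, 1, 0])
p_pred = np.array([0.9, 0.2, 0.8, 0.1, 0.5, 0.5, 0.5, 0.5])
groups = np.array(["A"] * 4 + ["B"] * 4)
cal = subgroup_calibration(y_true, p_pred, groups)
```

Here subgroup A has mean prediction 0.5 against an event rate of 0.5, while subgroup B predicts 0.5 against an event rate of 0.75, the kind of subgroup miscalibration the paper targets.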
no code implementations • 27 Jan 2023 • Ivana Malenica, Rachael V. Phillips, Daniel Lazzareschi, Jeremy R. Coyle, Romain Pirracchio, Mark J. Van Der Laan
We propose a novel, fully nonparametric approach to multi-task learning, the Multi-task Highly Adaptive Lasso (MT-HAL).

1 code implementation • 17 Nov 2022 • Jean Feng, Alexej Gossmann, Gene Pennello, Nicholas Petrick, Berkman Sahiner, Romain Pirracchio
Performance monitoring of machine learning (ML)-based risk prediction models in healthcare is complicated by the issue of confounding medical interventions (CMI): when an algorithm predicts a patient to be at high risk for an adverse event, clinicians are more likely to administer prophylactic treatment and alter the very target that the algorithm aims to predict.
no code implementations • 21 Mar 2022 • Jean Feng, Gene Pennello, Nicholas Petrick, Berkman Sahiner, Romain Pirracchio, Alexej Gossmann
Each modification introduces a risk of deteriorating performance and must be validated on a test dataset.
1 code implementation • 13 Oct 2021 • Jean Feng, Alexej Gossmann, Berkman Sahiner, Romain Pirracchio
In the COPD study, BLR and MarBLR dynamically combined the original model with a continually-refitted gradient boosted tree to achieve aAUCs of 0.924 (95% CI 0.913-0.935) and 0.925 (95% CI 0.914-0.935), compared to the static model's aAUC of 0.904 (95% CI 0.892-0.916).
no code implementations • 21 Sep 2021 • Ivana Malenica, Rachael V. Phillips, Romain Pirracchio, Antoine Chambaz, Alan Hubbard, Mark J. Van Der Laan
In this work, we introduce the Personalized Online Super Learner (POSL) -- an online ensembling algorithm for streaming data whose optimization procedure accommodates varying degrees of personalization.
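A generic flavor of online ensembling over streaming data can be sketched with exponentially weighted candidate learners: after each observation, each learner's weight is discounted by its loss. This is a standard online-ensembling sketch, not POSL itself, which additionally handles personalization and a richer optimization procedure; the function name and toy stream are assumptions.

```python
import numpy as np

def online_weighted_ensemble(preds, y, eta=1.0):
    """Exponentially weighted online ensembling of K candidate learners.

    `preds` has shape (T, K): each learner's prediction at each of T
    time steps. Weights start uniform and are updated after every
    observation from the squared-error loss, so better learners
    accumulate more weight as the stream unfolds.
    """
    T, K = preds.shape
    w = np.ones(K) / K
    combined = np.empty(T)
    for t in range(T):
        combined[t] = w @ preds[t]           # ensemble prediction
        loss = (preds[t] - y[t]) ** 2        # per-learner loss
        w = w * np.exp(-eta * loss)          # exponential discount
        w /= w.sum()                         # renormalize
    return combined, w

# Toy stream: learner 0 tracks the target exactly, learner 1 is biased.
y = np.array([0.0, 1.0, 0.0, 1.0, 0.0])
preds = np.column_stack([y, y + 1.0])
combined, w = online_weighted_ensemble(preds, y)
```

After a few observations the accurate learner dominates the weight vector, which is the basic mechanism an online super learner builds on.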