Search Results for author: Adarsh Subbaswamy

Found 10 papers, 0 papers with code

Designing monitoring strategies for deployed machine learning algorithms: navigating performativity through a causal lens

no code implementations20 Nov 2023 Jean Feng, Adarsh Subbaswamy, Alexej Gossmann, Harvineet Singh, Berkman Sahiner, Mi-Ok Kim, Gene Pennello, Nicholas Petrick, Romain Pirracchio, Fan Xia

When an ML algorithm interacts with its environment, it can alter the data-generating mechanism, which becomes a major source of bias when evaluating the algorithm's standalone performance; this issue is known as performativity.

Causal Inference, Ethics
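The performativity problem described in the abstract can be illustrated with a small, entirely hypothetical simulation (the intervention rule, effect size, and all numbers here are assumptions for illustration, not taken from the paper): a deployed risk model triggers interventions for predicted-positive cases, which suppresses the outcome it predicts, so a naive post-deployment accuracy estimate is biased downward relative to the pre-deployment estimate.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def simulate(n, deployed):
    """Simulate outcomes; if the model is deployed, predicted-positive
    cases receive an intervention that lowers their outcome probability."""
    correct = 0
    for _ in range(n):
        x = random.gauss(0.0, 1.0)
        p = sigmoid(2.0 * x)          # true outcome probability
        pred = p > 0.5                # the model's prediction
        if deployed and pred:
            p *= 0.3                  # hypothetical intervention suppresses the outcome
        y = random.random() < p
        correct += (pred == y)
    return correct / n

acc_pre = simulate(50_000, deployed=False)   # evaluation before deployment
acc_post = simulate(50_000, deployed=True)   # naive evaluation after deployment
print(acc_pre, acc_post)
```

The model itself is unchanged between the two runs; only the data-generating mechanism has shifted in response to its deployment, which is exactly why standalone post-deployment estimates can mislead.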

Machine Learning for Health symposium 2022 -- Extended Abstract track

no code implementations28 Nov 2022 Antonio Parziale, Monica Agrawal, Shalmali Joshi, Irene Y. Chen, Shengpu Tang, Luis Oala, Adarsh Subbaswamy

A collection of the extended abstracts that were presented at the 2nd Machine Learning for Health symposium (ML4H 2022), which was held both virtually and in person on November 28, 2022, in New Orleans, Louisiana, USA.

Evaluating Model Robustness and Stability to Dataset Shift

no code implementations28 Oct 2020 Adarsh Subbaswamy, Roy Adams, Suchi Saria

We consider shifts in user-defined conditional distributions, allowing some distributions to shift while other portions of the data distribution remain fixed.

BIG-bench Machine Learning
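A minimal sketch of the evaluation idea in the abstract, under assumptions of my own (the linear mechanism, the mean-shift form, and the shift magnitudes are illustrative, not the paper's method): hold the marginal P(X) fixed, shift only the conditional P(Y | X), and trace how a fixed model's error degrades with the shift magnitude.

```python
import random

random.seed(1)

def sample(n, shift=0.0):
    """P(X) stays fixed; only the conditional P(Y | X) is shifted,
    here by moving its mean away from the training mechanism."""
    data = []
    for _ in range(n):
        x = random.gauss(0.0, 1.0)
        y = 2.0 * x + shift + random.gauss(0.0, 0.5)
        data.append((x, y))
    return data

def predict(x):
    """Model fit to the unshifted mechanism: the true training slope."""
    return 2.0 * x

def mse(data):
    return sum((predict(x) - y) ** 2 for x, y in data) / len(data)

# Stability curve: error of the fixed model as the conditional shifts.
curve = {s: mse(sample(20_000, shift=s)) for s in (0.0, 0.5, 1.0)}
print(curve)
```

The curve makes the stability question concrete: a robust model is one whose error grows slowly as the user-specified conditional is perturbed, while everything held fixed stays fixed.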

I-SPEC: An End-to-End Framework for Learning Transportable, Shift-Stable Models

no code implementations20 Feb 2020 Adarsh Subbaswamy, Suchi Saria

However, these approaches assume that the data generating process is known in the form of a full causal graph, which is generally not the case.

Mortality Prediction

A Unifying Causal Framework for Analyzing Dataset Shift-stable Learning Algorithms

no code implementations27 May 2019 Adarsh Subbaswamy, Bryant Chen, Suchi Saria

Recent interest in the external validity of prediction models (i.e., the problem of differing train and test distributions, known as dataset shift) has produced many methods for finding predictive distributions that are invariant to dataset shifts and can be used for prediction in new, unseen environments.
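The notion of an invariant predictive distribution can be sketched with a toy two-environment simulation (the structural equations, noise scales, and environment index below are my own illustrative assumptions, not the framework in the paper): the mechanism Y ← X1 is the same in every environment, while the relationship between Y and an anti-causal feature X2 varies, so a predictor built on X1 keeps a stable error across environments and one built on X2 does not.

```python
import random

random.seed(2)

def env(n, e):
    """Two environments: the mechanism Y <- X1 is invariant, but the
    anti-causal feature X2 depends on the environment index e."""
    rows = []
    for _ in range(n):
        x1 = random.gauss(0.0, 1.0)
        y = x1 + random.gauss(0.0, 0.3)
        x2 = y + random.gauss(0.0, 0.3 + 1.5 * e)   # relation to Y varies with e
        rows.append((x1, x2, y))
    return rows

def fit_slope(pairs):
    """Least-squares slope through the origin for (feature, target) pairs."""
    num = sum(f * t for f, t in pairs)
    den = sum(f * f for f, _ in pairs)
    return num / den

train = env(20_000, e=0)
b1 = fit_slope([(x1, y) for x1, x2, y in train])   # slope on the stable feature
b2 = fit_slope([(x2, y) for x1, x2, y in train])   # slope on the unstable feature

def mse(rows, use_x2, b):
    return sum((b * (x2 if use_x2 else x1) - y) ** 2
               for x1, x2, y in rows) / len(rows)

test = env(20_000, e=1)                 # a new, unseen environment
stable_err = mse(test, False, b1)
unstable_err = mse(test, True, b2)
print(stable_err, unstable_err)
```

The X1-based predictor transfers because its conditional P(Y | X1) is the same in both environments; the X2-based one exploits an environment-specific association and fails in the new environment.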

Tutorial: Safe and Reliable Machine Learning

no code implementations15 Apr 2019 Suchi Saria, Adarsh Subbaswamy

This document serves as a brief overview of the "Safe and Reliable Machine Learning" tutorial given at the 2019 ACM Conference on Fairness, Accountability, and Transparency (FAT* 2019).

BIG-bench Machine Learning, Fairness

Preventing Failures Due to Dataset Shift: Learning Predictive Models That Transport

no code implementations11 Dec 2018 Adarsh Subbaswamy, Peter Schulam, Suchi Saria

Classical supervised learning produces unreliable models when training and target distributions differ, with most existing solutions requiring samples from the target domain.

Counterfactual Normalization: Proactively Addressing Dataset Shift and Improving Reliability Using Causal Mechanisms

no code implementations9 Aug 2018 Adarsh Subbaswamy, Suchi Saria

Predictive models can fail to generalize from training to deployment environments because of dataset shift, posing a threat to model reliability and the safety of downstream decisions made in practice.

counterfactual