no code implementations • 22 Feb 2024 • Jean Feng, Harvineet Singh, Fan Xia, Adarsh Subbaswamy, Alexej Gossmann
Machine learning (ML) algorithms often differ in performance across domains.
no code implementations • 20 Nov 2023 • Jean Feng, Adarsh Subbaswamy, Alexej Gossmann, Harvineet Singh, Berkman Sahiner, Mi-Ok Kim, Gene Pennello, Nicholas Petrick, Romain Pirracchio, Fan Xia
When an ML algorithm interacts with its environment, the algorithm can affect the data-generating mechanism and be a major source of bias when evaluating its standalone performance, an issue known as performativity.
no code implementations • 28 Nov 2022 • Antonio Parziale, Monica Agrawal, Shalmali Joshi, Irene Y. Chen, Shengpu Tang, Luis Oala, Adarsh Subbaswamy
A collection of the extended abstracts that were presented at the 2nd Machine Learning for Health symposium (ML4H 2022), which was held both virtually and in person on November 28, 2022, in New Orleans, Louisiana, USA.
no code implementations • 28 Oct 2020 • Adarsh Subbaswamy, Roy Adams, Suchi Saria
We consider shifts in user-defined conditional distributions, allowing some distributions to shift while keeping other portions of the data distribution fixed.
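As an illustrative sketch only (not the paper's method), the following hypothetical simulation shows what such a shift looks like: the marginal P(X) is held fixed while only the conditional P(Y | X) shifts, and a fixed decision rule's accuracy is evaluated under both conditionals. All names and parameter values here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: X ~ N(0, 1) stays fixed across environments;
# only the conditional P(Y | X) shifts (the logistic slope changes).
n = 10_000
x = rng.normal(size=n)

def sample_y(x, slope, rng):
    # Y | X is Bernoulli with a logistic link; `slope` parameterizes the shift.
    p = 1 / (1 + np.exp(-slope * x))
    return (rng.random(x.shape) < p).astype(int)

y_source = sample_y(x, slope=2.0, rng=rng)   # source conditional
y_target = sample_y(x, slope=0.5, rng=rng)   # shifted conditional, same P(X)

# A fixed decision rule, well matched to the source conditional:
pred = (x > 0).astype(int)

acc_source = (pred == y_source).mean()
acc_target = (pred == y_target).mean()
print(f"accuracy under source conditional:  {acc_source:.3f}")
print(f"accuracy under shifted conditional: {acc_target:.3f}")
```

The point of the sketch is that the same model evaluated on the same covariate distribution can degrade when only a user-specified conditional shifts, which is the kind of targeted shift the abstract describes.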
no code implementations • 20 Feb 2020 • Adarsh Subbaswamy, Suchi Saria
However, these approaches assume that the data generating process is known in the form of a full causal graph, which is generally not the case.
no code implementations • 27 May 2019 • Adarsh Subbaswamy, Bryant Chen, Suchi Saria
Recent interest in the external validity of prediction models (i.e., the problem of differing train and test distributions, known as dataset shift) has produced many methods for finding predictive distributions that are invariant to dataset shifts and can be used for prediction in new, unseen environments.
no code implementations • 15 Apr 2019 • Suchi Saria, Adarsh Subbaswamy
This document serves as a brief overview of the "Safe and Reliable Machine Learning" tutorial given at the 2019 ACM Conference on Fairness, Accountability, and Transparency (FAT* 2019).
no code implementations • 11 Dec 2018 • Adarsh Subbaswamy, Peter Schulam, Suchi Saria
Classical supervised learning produces unreliable models when training and target distributions differ, with most existing solutions requiring samples from the target domain.
no code implementations • 9 Aug 2018 • Adarsh Subbaswamy, Suchi Saria
Predictive models can fail to generalize from training to deployment environments because of dataset shift, posing a threat to model reliability and the safety of downstream decisions made in practice.
no code implementations • 6 Apr 2017 • Hossein Soleimani, Adarsh Subbaswamy, Suchi Saria
Treatment effects can be estimated from observational data as the difference in potential outcomes.
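To make the potential-outcomes claim concrete, here is a minimal hypothetical simulation (not the paper's estimator) in which the average treatment effect, E[Y(1)] - E[Y(0)], is recovered by a simple difference in group means. This naive estimator is only valid because treatment is randomized in the simulation; the abstract concerns observational data, where adjustment for confounding is required.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical simulated data with a known true treatment effect of 2.0.
n = 50_000
t = rng.integers(0, 2, size=n)      # randomized treatment assignment
y0 = rng.normal(loc=1.0, size=n)    # potential outcome without treatment
y1 = y0 + 2.0                       # potential outcome with treatment
y = np.where(t == 1, y1, y0)        # only one potential outcome is observed

# Difference in observed group means estimates E[Y(1)] - E[Y(0)]
# under randomization.
ate_hat = y[t == 1].mean() - y[t == 0].mean()
print(f"estimated ATE: {ate_hat:.2f}")
```

With observational data, the two groups generally differ in ways other than treatment, so the raw mean difference is biased and methods such as adjustment or weighting are needed.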