no code implementations • 8 Sep 2022 • Marc Juarez, Samuel Yeom, Matt Fredrikson
Our experimental results on real-world datasets show that this approach is effective, achieving 80--100% AUC-ROC in detecting shifts involving the underrepresentation of a demographic group in the training set.
no code implementations • 18 Feb 2020 • Samuel Yeom, Matt Fredrikson
We turn the definition of individual fairness on its head: rather than ascertaining the fairness of a model given a predetermined metric, we find a metric for a given model that satisfies individual fairness.
2 code implementations • 27 Jun 2019 • Zilong Tan, Samuel Yeom, Matt Fredrikson, Ameet Talwalkar
In contrast, we demonstrate the promise of learning a model-aware fair representation, focusing on kernel-based models.
1 code implementation • 21 Jun 2019 • Emily Black, Samuel Yeom, Matt Fredrikson
We present FlipTest, a black-box technique for uncovering discrimination in classifiers.
1 code implementation • NeurIPS 2018 • Samuel Yeom, Anupam Datta, Matt Fredrikson
In this paper we formulate a definition of proxy use for the setting of linear regression and present algorithms for detecting proxies.
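As an illustration of the linear-regression setting, the sketch below flags features that behave like proxies. This is a hypothetical heuristic, not the paper's actual definition or detection algorithm: it simply flags a feature as a candidate proxy when it both correlates strongly with the protected attribute and carries non-negligible weight in the model.

```python
import numpy as np

def candidate_proxies(X, w, z, corr_tol=0.5, weight_tol=0.1):
    # Illustrative heuristic: feature j is a candidate proxy when it
    # correlates with the protected attribute z AND has a large enough
    # coefficient w[j] to influence the model's output.
    flags = []
    for j in range(X.shape[1]):
        corr = np.corrcoef(X[:, j], z)[0, 1]
        if abs(corr) > corr_tol and abs(w[j]) > weight_tol:
            flags.append(j)
    return flags

rng = np.random.default_rng(1)
z = rng.integers(0, 2, size=200).astype(float)   # protected attribute
x0 = z + 0.1 * rng.normal(size=200)              # strong proxy for z
x1 = rng.normal(size=200)                        # independent feature
X = np.column_stack([x0, x1])
w = np.array([1.0, 1.0])                         # both features weighted
print(candidate_proxies(X, w, z))                # flags only feature 0
```

A real proxy-use analysis must also account for combinations of features that jointly encode the protected attribute, which a per-feature correlation check like this one misses.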
no code implementations • 26 Aug 2018 • Samuel Yeom, Michael Carl Tschantz
We mathematically compare four competing definitions of group-level nondiscrimination: demographic parity, equalized odds, predictive parity, and calibration.
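Two of these criteria can be computed directly from predictions; the sketch below (an illustrative example, not code from the paper) shows demographic parity and equalized odds as group-wise gaps, with `y_hat` the binary predictions, `y` the true labels, and `a` a binary group attribute.

```python
import numpy as np

def demographic_parity_gap(y_hat, a):
    # |P(y_hat=1 | a=0) - P(y_hat=1 | a=1)|
    return abs(y_hat[a == 0].mean() - y_hat[a == 1].mean())

def equalized_odds_gap(y_hat, y, a):
    # max over true label y of the group gap in P(y_hat=1 | y, a)
    gaps = []
    for label in (0, 1):
        m = y == label
        gaps.append(abs(y_hat[m & (a == 0)].mean()
                        - y_hat[m & (a == 1)].mean()))
    return max(gaps)

rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=1000)
y = rng.integers(0, 2, size=1000)
y_hat = y.copy()                       # a perfect predictor
print(equalized_odds_gap(y_hat, y, a)) # 0.0: perfection satisfies equalized odds
```

Note that a perfect predictor satisfies equalized odds but generally not demographic parity whenever base rates differ across groups, which is one source of the incompatibilities the paper analyzes.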
1 code implementation • 5 Sep 2017 • Samuel Yeom, Irene Giacomelli, Matt Fredrikson, Somesh Jha
This paper examines how overfitting and influence affect an attacker's ability to learn information about the training data from machine learning models, via training-set membership inference or attribute inference attacks.
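The connection to overfitting can be illustrated with a simple loss-threshold membership-inference attack, in the spirit of the attacks studied in this line of work (the details below are a sketch, not the paper's exact construction): an overfit model incurs lower loss on its training points, so thresholding the per-example loss separates members from non-members.

```python
import numpy as np

def cross_entropy(p, y):
    # per-example log loss for predicted probability p of class 1
    eps = 1e-12
    return -np.log(np.where(y == 1, p, 1.0 - p) + eps)

def membership_guess(p, y, threshold):
    # guess "member" when the model's loss on (x, y) falls below threshold
    return cross_entropy(p, y) < threshold

# toy data: an overfit model is far more confident on its training set
p_train = np.array([0.99, 0.98, 0.97])   # members
p_test = np.array([0.60, 0.55, 0.40])    # non-members
y = np.array([1, 1, 1])
print(membership_guess(p_train, y, threshold=0.1))  # all True
print(membership_guess(p_test, y, threshold=0.1))   # all False
```

The larger the train-test loss gap (i.e., the more overfit the model), the better such a threshold attack performs, which is the intuition the paper makes precise.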