Search Results for author: Samuel Yeom

Found 7 papers, 4 papers with code

Black-Box Audits for Group Distribution Shifts

no code implementations · 8 Sep 2022 · Marc Juarez, Samuel Yeom, Matt Fredrikson

Our experimental results on real-world datasets show that this approach is effective, achieving 80--100% AUC-ROC in detecting shifts involving the underrepresentation of a demographic group in the training set.

Individual Fairness Revisited: Transferring Techniques from Adversarial Robustness

no code implementations · 18 Feb 2020 · Samuel Yeom, Matt Fredrikson

We turn the definition of individual fairness on its head---rather than ascertaining the fairness of a model given a predetermined metric, we find a metric for a given model that satisfies individual fairness.

Adversarial Robustness · Fairness

Learning Fair Representations for Kernel Models

2 code implementations · 27 Jun 2019 · Zilong Tan, Samuel Yeom, Matt Fredrikson, Ameet Talwalkar

In contrast, we demonstrate the promise of learning a model-aware fair representation, focusing on kernel-based models.

Dimensionality Reduction · Fairness

FlipTest: Fairness Testing via Optimal Transport

1 code implementation · 21 Jun 2019 · Emily Black, Samuel Yeom, Matt Fredrikson

We present FlipTest, a black-box technique for uncovering discrimination in classifiers.

Fairness · Translation

Hunting for Discriminatory Proxies in Linear Regression Models

1 code implementation · NeurIPS 2018 · Samuel Yeom, Anupam Datta, Matt Fredrikson

In this paper we formulate a definition of proxy use for the setting of linear regression and present algorithms for detecting proxies.

Attribute · Regression

Avoiding Disparity Amplification under Different Worldviews

no code implementations · 26 Aug 2018 · Samuel Yeom, Michael Carl Tschantz

We mathematically compare four competing definitions of group-level nondiscrimination: demographic parity, equalized odds, predictive parity, and calibration.

Fairness

Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting

1 code implementation · 5 Sep 2017 · Samuel Yeom, Irene Giacomelli, Matt Fredrikson, Somesh Jha

This paper examines the effect that overfitting and influence have on the ability of an attacker to learn information about the training data from machine learning models, either through training set membership inference or attribute inference attacks.
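A minimal sketch of the loss-threshold membership inference attack this line of work studies, using synthetic losses rather than a real model: an overfit model tends to assign lower loss to its training points, so an attacker can guess "member" whenever an example's loss falls below a threshold. All numbers here (the loss distributions, the threshold) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-example losses: training-set members get
# systematically lower loss than non-members due to overfitting.
member_losses = rng.exponential(scale=0.2, size=1000)
nonmember_losses = rng.exponential(scale=1.0, size=1000)

THRESHOLD = 0.5  # attacker's decision boundary (assumed)

def infer_membership(losses, threshold):
    """Guess that an example was in the training set iff its loss
    is below the threshold."""
    return losses < threshold

tpr = infer_membership(member_losses, THRESHOLD).mean()     # true-positive rate
fpr = infer_membership(nonmember_losses, THRESHOLD).mean()  # false-positive rate

# Membership advantage: 0 means no leakage, 1 means total leakage.
advantage = tpr - fpr
print(f"TPR={tpr:.2f}  FPR={fpr:.2f}  advantage={advantage:.2f}")
```

The gap between the two loss distributions is what overfitting creates, and the attacker's advantage grows with that gap; with well-generalized models the distributions overlap and the advantage shrinks toward zero.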

Attribute · BIG-bench Machine Learning
