Search Results for author: Michael P. Kim

Found 14 papers, 3 papers with code

Swap Agnostic Learning, or Characterizing Omniprediction via Multicalibration

no code implementations NeurIPS 2023 Parikshit Gopalan, Michael P. Kim, Omer Reingold

We establish an equivalence between swap variants of omniprediction, multicalibration, and swap agnostic learning.

Fairness

Loss Minimization through the Lens of Outcome Indistinguishability

no code implementations 16 Oct 2022 Parikshit Gopalan, Lunjia Hu, Michael P. Kim, Omer Reingold, Udi Wieder

This decomposition highlights the utility of a new multi-group fairness notion that we call calibrated multiaccuracy, which lies in between multiaccuracy and multicalibration.

Fairness

Making Decisions under Outcome Performativity

no code implementations 4 Oct 2022 Michael P. Kim, Juan C. Perdomo

This performative prediction setting raises new challenges for learning "optimal" decision rules.

Is your model predicting the past?

1 code implementation 23 Jun 2022 Moritz Hardt, Michael P. Kim

When does a machine learning model predict the future of individuals and when does it recite patterns that predate the individuals?

BIG-bench Machine Learning

Planting Undetectable Backdoors in Machine Learning Models

no code implementations 14 Apr 2022 Shafi Goldwasser, Michael P. Kim, Vinod Vaikuntanathan, Or Zamir

Second, we demonstrate how to insert undetectable backdoors in models trained using the Random Fourier Features (RFF) learning paradigm or in Random ReLU networks.

Adversarial Robustness BIG-bench Machine Learning

Low-Degree Multicalibration

no code implementations 2 Mar 2022 Parikshit Gopalan, Michael P. Kim, Mihir Singhal, Shengjia Zhao

This stringent notion -- that predictions be well-calibrated across a rich class of intersecting subpopulations -- provides its strong guarantees at a cost: the computational and sample complexities of learning multicalibrated predictors are high, and grow exponentially with the number of class labels.

Fairness
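The multicalibration requirement described above -- predictions well-calibrated on every subpopulation in a rich class -- can be probed empirically. The sketch below is a minimal illustrative check for binary outcomes, not the paper's algorithm; the function name and the representation of subgroups as boolean masks are assumptions for the example.

```python
import numpy as np

def multicalibration_violation(preds, labels, groups, n_bins=10):
    """Largest empirical calibration gap |mean(labels) - mean(preds)| over
    all (subgroup, prediction-bin) cells. Illustrative sketch only: subgroups
    are boolean masks over the examples, and predictions are binned uniformly."""
    worst = 0.0
    bins = np.minimum((preds * n_bins).astype(int), n_bins - 1)
    for g in groups:                     # each g: boolean mask over examples
        for b in range(n_bins):
            cell = g & (bins == b)
            if cell.sum() == 0:
                continue                 # empty cell contributes no violation
            gap = abs(labels[cell].mean() - preds[cell].mean())
            worst = max(worst, gap)
    return worst
```

The exponential cost noted in the abstract shows up here as the number of (subgroup, bin) cells that must each receive enough samples to estimate its gap.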

Calibrating Predictions to Decisions: A Novel Approach to Multi-Class Calibration

no code implementations NeurIPS 2021 Shengjia Zhao, Michael P. Kim, Roshni Sahoo, Tengyu Ma, Stefano Ermon

In this work, we introduce a new notion -- decision calibration -- that requires the predicted distribution and true distribution to be "indistinguishable" to a set of downstream decision-makers.

Decision Making

Outcome Indistinguishability

no code implementations 26 Nov 2020 Cynthia Dwork, Michael P. Kim, Omer Reingold, Guy N. Rothblum, Gal Yona

Prediction algorithms assign numbers to individuals that are popularly understood as individual "probabilities" -- what is the probability of 5-year survival after cancer diagnosis?

A Distributional Framework for Data Valuation

no code implementations ICML 2020 Amirata Ghorbani, Michael P. Kim, James Zou

Shapley value is a classic notion from game theory, historically used to quantify the contributions of individuals within groups, and more recently applied to assign values to data points when training machine learning models.

Data Valuation
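The Shapley value mentioned above is typically approximated by sampling, since the exact value averages a point's marginal contribution over all orderings. Below is a minimal permutation-sampling sketch with a caller-supplied utility function; it illustrates the classic game-theoretic computation, not the paper's distributional framework, and all names are illustrative.

```python
import numpy as np

def monte_carlo_shapley(utility, n_points, n_perms=200, seed=0):
    """Estimate each point's Shapley value by averaging its marginal
    contribution over random orderings (standard permutation sampling).
    `utility` maps a frozenset of point indices to a score."""
    rng = np.random.default_rng(seed)
    values = np.zeros(n_points)
    for _ in range(n_perms):
        perm = rng.permutation(n_points)
        prev = utility(frozenset())      # utility of the empty coalition
        coalition = set()
        for i in perm:
            coalition.add(i)
            cur = utility(frozenset(coalition))
            values[i] += cur - prev      # marginal contribution of point i
            prev = cur
    return values / n_perms
```

In data valuation, `utility` would retrain a model on the coalition and return test accuracy; with the toy utility `len(s)`, every point's estimated value is exactly 1.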

Tracking and Improving Information in the Service of Fairness

no code implementations 22 Apr 2019 Sumegha Garg, Michael P. Kim, Omer Reingold

As algorithmic prediction systems have become widespread, fears that these systems may inadvertently discriminate against members of underrepresented populations have grown.

Decision Making Fairness +1

Preference-Informed Fairness

no code implementations 3 Apr 2019 Michael P. Kim, Aleksandra Korolova, Guy N. Rothblum, Gal Yona

We introduce and study a new notion of preference-informed individual fairness (PIIF) that is a relaxation of both individual fairness and envy-freeness.

Decision Making Fairness

Multiaccuracy: Black-Box Post-Processing for Fairness in Classification

1 code implementation 31 May 2018 Michael P. Kim, Amirata Ghorbani, James Zou

Prediction systems are successfully deployed in applications ranging from disease diagnosis to predicting creditworthiness to image recognition.

Classification Fairness +2
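Multiaccuracy post-processing treats the trained model as a black box and iteratively corrects its predictions on identified subpopulations. The sketch below is a simplified additive-update loop over boolean subgroup masks, not the paper's auditing algorithm; the function name, parameters, and stopping threshold are assumptions for the example.

```python
import numpy as np

def multiaccuracy_boost(preds, labels, groups, alpha=0.05, lr=0.5, rounds=50):
    """Repeatedly find the subgroup whose mean residual (labels - preds) is
    largest in magnitude and shift its predictions toward the observed mean,
    until every subgroup's residual is within `alpha`. Simplified sketch."""
    p = np.clip(preds.astype(float).copy(), 0.0, 1.0)
    for _ in range(rounds):
        gaps = [(abs((labels[g] - p[g]).mean()), g)
                for g in groups if g.sum() > 0]
        gap, g = max(gaps, key=lambda t: t[0])
        if gap <= alpha:
            break                        # all subgroups approximately accurate
        p[g] = np.clip(p[g] + lr * (labels[g] - p[g]).mean(), 0.0, 1.0)
    return p
```

Because the update only nudges predictions on the offending subgroup, the underlying model is never retrained, which is what makes the approach "black-box."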

Fairness Through Computationally-Bounded Awareness

no code implementations NeurIPS 2018 Michael P. Kim, Omer Reingold, Guy N. Rothblum

We study the problem of fair classification within the versatile framework of Dwork et al. [ITCS '12], which assumes the existence of a metric that measures similarity between pairs of individuals.

Fairness

Calibration for the (Computationally-Identifiable) Masses

1 code implementation 22 Nov 2017 Úrsula Hébert-Johnson, Michael P. Kim, Omer Reingold, Guy N. Rothblum

We develop and study multicalibration -- a new measure of algorithmic fairness that aims to mitigate concerns about discrimination introduced in the process of learning a predictor from data.

Fairness
