Search Results for author: Saeed Sharifi-Malvajerdi

Found 10 papers, 4 papers with code

Bayesian Strategic Classification

no code implementations • 13 Feb 2024 • Lee Cohen, Saeed Sharifi-Malvajerdi, Kevin Stangl, Ali Vakilian, Juba Ziani

We initiate the study of partial information release by the learner in strategic classification.

Classification

Sequential Strategic Screening

no code implementations • 31 Jan 2023 • Lee Cohen, Saeed Sharifi-Malvajerdi, Kevin Stangl, Ali Vakilian, Juba Ziani

We initiate the study of strategic behavior in screening processes with multiple classifiers.

Multiaccurate Proxies for Downstream Fairness

no code implementations • 9 Jul 2021 • Emily Diana, Wesley Gill, Michael Kearns, Krishnaram Kenthapadi, Aaron Roth, Saeed Sharifi-Malvajerdi

The goal of the proxy is to allow a general "downstream" learner -- with minimal assumptions on their prediction task -- to use the proxy to train a model that is fair with respect to the true sensitive features.

Fairness Generalization Bounds

Adaptive Machine Unlearning

1 code implementation • NeurIPS 2021 • Varun Gupta, Christopher Jung, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi, Chris Waites

In this paper, we give a general reduction from deletion guarantees against adaptive sequences to deletion guarantees against non-adaptive sequences, using differential privacy and its connection to max information.

Machine Unlearning

Lexicographically Fair Learning: Algorithms and Generalization

no code implementations • 16 Feb 2021 • Emily Diana, Wesley Gill, Ira Globus-Harris, Michael Kearns, Aaron Roth, Saeed Sharifi-Malvajerdi

We extend the notion of minimax fairness in supervised learning problems to its natural conclusion: lexicographic minimax fairness (or lexifairness for short).

Fairness Generalization Bounds

A New Analysis of Differential Privacy's Generalization Guarantees

no code implementations • 9 Sep 2019 • Christopher Jung, Katrina Ligett, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi, Moshe Shenfeld

This second claim follows from a thought experiment in which we imagine that the dataset is resampled from the posterior distribution after the mechanism has committed to its answers.

Average Individual Fairness: Algorithms, Generalization and Experiments

1 code implementation • NeurIPS 2019 • Michael Kearns, Aaron Roth, Saeed Sharifi-Malvajerdi

Given a sample of individuals and classification problems, we design an oracle-efficient algorithm (i.e. one that is given access to any standard, fairness-free learning heuristic) for the fair empirical risk minimization task.

Classification Fairness

Differentially Private Fair Learning

no code implementations • 6 Dec 2018 • Matthew Jagielski, Michael Kearns, Jieming Mao, Alina Oprea, Aaron Roth, Saeed Sharifi-Malvajerdi, Jonathan Ullman

This algorithm is appealingly simple, but must be able to use protected group membership explicitly at test time, which can be viewed as a form of 'disparate treatment'.

Attribute Fairness