no code implementations • 13 Feb 2024 • Lee Cohen, Saeed Sharifi-Malvajerdi, Kevin Stangl, Ali Vakilian, Juba Ziani
We initiate the study of partial information release by the learner in strategic classification.
no code implementations • 31 Jan 2023 • Lee Cohen, Saeed Sharifi-Malvajerdi, Kevin Stangl, Ali Vakilian, Juba Ziani
We initiate the study of strategic behavior in screening processes with multiple classifiers.
no code implementations • 9 Jul 2021 • Emily Diana, Wesley Gill, Michael Kearns, Krishnaram Kenthapadi, Aaron Roth, Saeed Sharifi-Malvajerdi
The goal of the proxy is to allow a general "downstream" learner -- with minimal assumptions on their prediction task -- to be able to use the proxy to train a model that is fair with respect to the true sensitive features.
1 code implementation • NeurIPS 2021 • Varun Gupta, Christopher Jung, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi, Chris Waites
In this paper, we give a general reduction from deletion guarantees against adaptive sequences to deletion guarantees against non-adaptive sequences, using differential privacy and its connection to max information.
no code implementations • 16 Feb 2021 • Emily Diana, Wesley Gill, Ira Globus-Harris, Michael Kearns, Aaron Roth, Saeed Sharifi-Malvajerdi
We extend the notion of minimax fairness in supervised learning problems to its natural conclusion: lexicographic minimax fairness (or lexifairness for short).
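The lexicographic ordering behind lexifairness can be made concrete: compare models by their per-group error vectors sorted from worst group to best, preferring the lexicographically smallest vector. The sketch below is an illustration of this comparison only, not the paper's algorithm; the model names and error values are invented.

```python
# Compare candidate models by lexicographic minimax ("lexifair") order:
# sort each model's per-group errors from worst to best, then take the
# lexicographically smallest vector.

def lexifair_key(group_errors):
    """Per-group errors sorted from worst group to best."""
    return sorted(group_errors, reverse=True)

def pick_lexifair(candidates):
    """candidates: dict mapping model name -> list of per-group error rates."""
    return min(candidates, key=lambda name: lexifair_key(candidates[name]))

candidates = {
    "model_a": [0.30, 0.10, 0.05],  # worst-group error 0.30
    "model_b": [0.20, 0.20, 0.05],  # worst-group error 0.20
    "model_c": [0.20, 0.15, 0.10],  # ties on worst group, better second-worst
}
print(pick_lexifair(candidates))  # → model_c
```

`model_b` and `model_c` tie on worst-group error, so the comparison falls through to the second-worst group, where `model_c` wins.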
2 code implementations • 6 Jul 2020 • Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi
We study the data deletion problem for convex models.
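One simple approach in this spirit: for a convex objective, handle a deletion request by warm-starting gradient descent from the current parameters on the remaining data, rather than retraining from scratch. This is a minimal sketch of that idea for ridge-regularized least squares; the step counts, learning rate, and data are invented, and the noise calibration used for formal deletion guarantees is omitted.

```python
import numpy as np

def grad(theta, X, y, lam=0.1):
    """Gradient of ridge-regularized least-squares loss."""
    return X.T @ (X @ theta - y) / len(y) + lam * theta

def update_after_deletion(theta, X, y, steps=50, lr=0.1):
    """Warm-started gradient descent on the post-deletion dataset."""
    for _ in range(steps):
        theta = theta - lr * grad(theta, X, y)
    return theta

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

theta = update_after_deletion(np.zeros(3), X, y)   # initial fit
X2, y2 = X[:90], y[:90]                            # delete the last 10 points
theta2 = update_after_deletion(theta, X2, y2)      # cheap warm-started update
```

Because the objective is strongly convex, a few warm-started steps land close to the minimizer that full retraining on the remaining data would reach.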
1 code implementation • 12 Jun 2020 • Emily Diana, Travis Dick, Hadi Elzayn, Michael Kearns, Aaron Roth, Zachary Schutzman, Saeed Sharifi-Malvajerdi, Juba Ziani
We consider a variation on the classical finance problem of optimal portfolio design.
no code implementations • 9 Sep 2019 • Christopher Jung, Katrina Ligett, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi, Moshe Shenfeld
This second claim follows from a thought experiment in which we imagine that the dataset is resampled from the posterior distribution after the mechanism has committed to its answers.
1 code implementation • NeurIPS 2019 • Michael Kearns, Aaron Roth, Saeed Sharifi-Malvajerdi
Given a sample of individuals and classification problems, we design an oracle-efficient algorithm (i.e., one that is given access to any standard, fairness-free learning heuristic) for the fair empirical risk minimization task.
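A reductions-style outer loop makes the oracle-efficiency idea concrete: treat any off-the-shelf learner as a black box, and between calls reweight examples to push down the error of the currently worst-off group. This multiplicative-weights pattern is common in the fairness-reductions literature; the oracle and weight update below are illustrative choices, not the paper's algorithm.

```python
import numpy as np

def oracle(X, y, w, steps=200, lr=0.5):
    """The black box: weighted logistic regression via gradient descent."""
    theta = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ theta))
        theta -= lr * X.T @ (w * (p - y)) / w.sum()
    return theta

def group_errors(theta, X, y, groups):
    preds = (X @ theta > 0).astype(int)
    return {g: np.mean(preds[groups == g] != y[groups == g])
            for g in np.unique(groups)}

def fair_erm(X, y, groups, rounds=10, eta=1.0):
    w = np.ones(len(y))
    for _ in range(rounds):
        theta = oracle(X, y, w)                   # one oracle call per round
        errs = group_errors(theta, X, y, groups)
        worst = max(errs, key=errs.get)
        w = w * np.exp(eta * (groups == worst))   # upweight the worst-off group
        w /= w.mean()
    return theta

# Invented synthetic data for a quick run.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
groups = rng.integers(0, 2, size=200)
y = (X[:, 0] + 0.3 * (groups == 1) > 0).astype(int)
theta = fair_erm(X, y, groups)
```

The fairness logic lives entirely in the outer loop, so any standard, fairness-free heuristic can be swapped in as `oracle`.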
no code implementations • 6 Dec 2018 • Matthew Jagielski, Michael Kearns, Jieming Mao, Alina Oprea, Aaron Roth, Saeed Sharifi-Malvajerdi, Jonathan Ullman
This algorithm is appealingly simple, but must be able to use protected group membership explicitly at test time, which can be viewed as a form of 'disparate treatment'.
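"Using protected group membership explicitly at test time" can be as simple as thresholding the same risk score differently per group. The tiny sketch below illustrates that mechanism; the group labels, thresholds, and score are invented.

```python
# Group-dependent thresholds: the same score yields different decisions
# depending on group membership, a textbook form of disparate treatment.
THRESHOLDS = {"A": 0.5, "B": 0.4}

def classify(score, group):
    return int(score >= THRESHOLDS[group])

print(classify(0.45, "A"))  # → 0 (below group A's threshold)
print(classify(0.45, "B"))  # → 1 (above group B's threshold)
```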