no code implementations • 9 Jun 2022 • Yahav Bechavod, Aaron Roth
We consider an online learning problem with one-sided feedback, in which the learner is able to observe the true label only for positively predicted instances.
1 code implementation • 2 Jun 2022 • Osbert Bastani, Varun Gupta, Christopher Jung, Georgy Noarov, Ramya Ramalingam, Aaron Roth
It is computationally lightweight -- comparable to split conformal prediction -- but does not require having a held-out validation set, and so all data can be used for training models from which to derive a conformal score.
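For context, the split conformal baseline mentioned above can be sketched in a few lines. This is a generic illustration using the absolute-residual score on a held-out calibration set (function names are illustrative); the paper's method is the one that avoids needing that held-out set:

```python
import numpy as np

def split_conformal_halfwidth(cal_residuals, alpha=0.1):
    """Given absolute residuals |y - yhat| on a held-out calibration set,
    return the half-width of a (1 - alpha) marginal prediction interval,
    using the finite-sample-corrected quantile level."""
    n = len(cal_residuals)
    level = np.ceil((n + 1) * (1 - alpha)) / n
    return float(np.quantile(np.asarray(cal_residuals), min(level, 1.0)))

# Toy usage: residuals from some already-fitted regression model.
rng = np.random.default_rng(0)
residuals = np.abs(rng.normal(0, 1, size=1000))
q = split_conformal_halfwidth(residuals, alpha=0.1)
# The interval for a new point x is [yhat(x) - q, yhat(x) + q].
```

The cost of this baseline is exactly the data spent on calibration, which is the overhead the paper's approach removes.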
no code implementations • CVPR 2022 • Aditya Golatkar, Alessandro Achille, Yu-Xiang Wang, Aaron Roth, Michael Kearns, Stefano Soatto
AdaMix incorporates few-shot training or cross-modal zero-shot learning on public data prior to private fine-tuning to improve the trade-off.
no code implementations • 25 Jan 2022 • Ira Globus-Harris, Michael Kearns, Aaron Roth
We propose and analyze an algorithmic framework for "bias bounties": events in which external participants are invited to propose improvements to a trained model, akin to bug bounty events in software and security.
no code implementations • 9 Aug 2021 • Georgy Noarov, Mallesh Pai, Aaron Roth
The learner and the adversary then play in this game.
no code implementations • 9 Jul 2021 • Emily Diana, Wesley Gill, Michael Kearns, Krishnaram Kenthapadi, Aaron Roth, Saeed Sharifi-Malvajerdi
The goal of the proxy is to allow a general "downstream" learner -- with minimal assumptions on their prediction task -- to be able to use the proxy to train a model that is fair with respect to the true sensitive features.
1 code implementation • NeurIPS 2021 • Varun Gupta, Christopher Jung, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi, Chris Waites
In this paper, we give a general reduction from deletion guarantees against adaptive sequences to deletion guarantees against non-adaptive sequences, using differential privacy and its connection to max information.
no code implementations • 5 Apr 2021 • Jinshuo Dong, Aaron Roth, Weijie J. Su
In this rejoinder, we aim to address two broad issues that cover most comments made in the discussion.
1 code implementation • 11 Mar 2021 • Sergul Aydore, William Brown, Michael Kearns, Krishnaram Kenthapadi, Luca Melis, Aaron Roth, Ankit Siva
We propose, implement, and evaluate a new algorithm for releasing answers to very large numbers of statistical queries like $k$-way marginals, subject to differential privacy.
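A single $k$-way marginal can be answered privately with the standard Laplace mechanism, as in the sketch below (names and setup are illustrative, and this basic mechanism is background, not the paper's algorithm — the paper's contribution is scaling to very large numbers of such queries):

```python
import numpy as np

def noisy_marginal(data, cols, epsilon, seed=0):
    """Answer one k-way marginal (contingency-table counts over `cols`)
    with Laplace noise. Changing one person's record changes one cell of
    the marginal by 1, so the L1 sensitivity is 1 and Laplace(1/epsilon)
    noise per cell gives epsilon-differential privacy."""
    values, counts = np.unique(data[:, cols], axis=0, return_counts=True)
    noise = np.random.default_rng(seed).laplace(0, 1.0 / epsilon, size=len(counts))
    return values, counts + noise

# Toy binary dataset: 1000 rows, 4 attributes; answer one 2-way marginal.
rng = np.random.default_rng(1)
data = rng.integers(0, 2, size=(1000, 4))
vals, noisy = noisy_marginal(data, cols=[0, 2], epsilon=1.0)
```

Answering many marginals this way degrades quickly under composition, which is what motivates more sophisticated query-release algorithms.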
no code implementations • 16 Feb 2021 • Emily Diana, Wesley Gill, Ira Globus-Harris, Michael Kearns, Aaron Roth, Saeed Sharifi-Malvajerdi
We extend the notion of minimax fairness in supervised learning problems to its natural conclusion: lexicographic minimax fairness (or lexifairness for short).
no code implementations • 5 Jan 2021 • Varun Gupta, Christopher Jung, Georgy Noarov, Mallesh M. Pai, Aaron Roth
We present a general, efficient technique for providing contextual predictions that are "multivalid" in various senses, against an online sequence of adversarially chosen examples $(x, y)$.
1 code implementation • 5 Nov 2020 • Emily Diana, Wesley Gill, Michael Kearns, Krishnaram Kenthapadi, Aaron Roth
We consider a recently introduced framework in which fairness is measured by worst-case outcomes across groups, rather than by the more standard differences between group outcomes.
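The worst-case-across-groups objective itself is simple to state; here is a minimal sketch, assuming hard classifications and integer group labels (names are illustrative):

```python
import numpy as np

def worst_group_error(y_true, y_pred, groups):
    """The minimax-fairness objective: the largest error rate of any
    group, rather than the gap between group error rates."""
    return max(float(np.mean(y_true[groups == g] != y_pred[groups == g]))
               for g in np.unique(groups))

y_true = np.array([0, 1, 0, 1])
y_pred = np.array([0, 0, 0, 1])
groups = np.array([0, 0, 1, 1])
err = worst_group_error(y_true, y_pred, groups)  # group 0 errs on 1 of its 2 points
```

Minimizing this maximum is harder than evaluating it, since the worst-off group can change as the model changes.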
no code implementations • 18 Aug 2020 • Christopher Jung, Changhwa Lee, Mallesh M. Pai, Aaron Roth, Rakesh Vohra
We show how to achieve the notion of "multicalibration" from Hébert-Johnson et al. [2018] not just for means, but also for variances and other higher moments.
2 code implementations • 6 Jul 2020 • Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi
We study the data deletion problem for convex models.
1 code implementation • 12 Jun 2020 • Emily Diana, Travis Dick, Hadi Elzayn, Michael Kearns, Aaron Roth, Zachary Schutzman, Saeed Sharifi-Malvajerdi, Juba Ziani
We consider a variation on the classical finance problem of optimal portfolio design.
no code implementations • 18 Feb 2020 • Christopher Jung, Sampath Kannan, Changhwa Lee, Mallesh M. Pai, Aaron Roth, Rakesh Vohra
There is increasing regulatory interest in whether machine learning algorithms deployed in consequential domains (e.g., in criminal justice) treat different demographic groups "fairly."
no code implementations • 16 Feb 2020 • Eshwar Ram Arunachaleswaran, Sampath Kannan, Aaron Roth, Juba Ziani
We consider two objectives: social welfare maximization, and a fairness-motivated maximin objective that seeks to maximize the value to the population (starting node) with the \emph{least} expected value.
no code implementations • 12 Dec 2019 • Emily Diana, Michael Kearns, Seth Neel, Aaron Roth
We consider a fundamental dynamic allocation problem motivated by the problem of "securities lending" in financial markets, the mechanism underlying the short selling of stocks.
no code implementations • 9 Sep 2019 • Christopher Jung, Katrina Ligett, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi, Moshe Shenfeld
This second claim follows from a thought experiment in which we imagine that the dataset is resampled from the posterior distribution after the mechanism has committed to its answers.
1 code implementation • ICML 2020 • Seth Neel, Aaron Roth, Giuseppe Vietri, Zhiwei Steven Wu
We find that for the problem of learning linear classifiers, directly optimizing for 0/1 loss using our approach can out-perform the more standard approach of privately optimizing a convex-surrogate loss function on the Adult dataset.
no code implementations • 1 Jul 2019 • Matthew Joseph, Jieming Mao, Aaron Roth
We prove a general connection between the communication complexity of two-player games and the sample complexity of their multi-player locally private analogues.
1 code implementation • 21 Jun 2019 • Ryan Rogers, Aaron Roth, Adam Smith, Nathan Srebro, Om Thakkar, Blake Woodworth
We design a general framework for answering adaptive statistical queries that focuses on providing explicit confidence intervals along with point estimates.
no code implementations • 29 May 2019 • Hengchu Zhang, Edo Roth, Andreas Haeberlen, Benjamin C. Pierce, Aaron Roth
Curators of sensitive datasets sometimes need to know whether queries against the data are differentially private [Dwork et al. 2006].
Programming Languages • Logic in Computer Science
1 code implementation • NeurIPS 2019 • Michael Kearns, Aaron Roth, Saeed Sharifi-Malvajerdi
Given a sample of individuals and classification problems, we design an oracle-efficient algorithm (i.e., one that is given access to any standard, fairness-free learning heuristic) for the fair empirical risk minimization task.
1 code implementation • 25 May 2019 • Christopher Jung, Michael Kearns, Seth Neel, Aaron Roth, Logan Stapleton, Zhiwei Steven Wu
We consider settings in which the right notion of fairness is not captured by simple mathematical definitions (such as equality of error rates across groups), but might be more complex and nuanced and thus require elicitation from individual or collective stakeholders.
3 code implementations • 7 May 2019 • Jinshuo Dong, Aaron Roth, Weijie J. Su
More precisely, the privacy guarantees of \emph{any} hypothesis-testing-based definition of privacy (including original DP) converge to GDP in the limit under composition.
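The $\mu$-GDP trade-off function itself, $G_\mu(\alpha) = \Phi(\Phi^{-1}(1-\alpha) - \mu)$, is straightforward to compute. The sketch below uses a bisection-based normal quantile to stay dependency-free (helper names are illustrative):

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi_inv(p, lo=-10.0, hi=10.0):
    """Standard normal quantile by bisection (phi is increasing)."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def gdp_tradeoff(alpha, mu):
    """G_mu(alpha) = Phi(Phi^{-1}(1 - alpha) - mu): the smallest type II
    error of any test distinguishing neighboring datasets at type I
    error alpha, under mu-Gaussian differential privacy."""
    return phi(phi_inv(1.0 - alpha) - mu)

# mu = 0 means the datasets are indistinguishable: beta = 1 - alpha.
perfect = gdp_tradeoff(0.05, 0.0)
```

Larger $\mu$ pulls the curve down toward zero, i.e., easier distinguishability and weaker privacy.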
no code implementations • 7 Apr 2019 • Matthew Joseph, Jieming Mao, Seth Neel, Aaron Roth
Next, we show that our reduction is tight by exhibiting a family of problems such that for any $k$, there is a fully interactive $k$-compositional protocol which solves the problem, while no sequentially interactive protocol can solve the problem without at least an $\tilde \Omega(k)$ factor more examples.
1 code implementation • NeurIPS 2019 • Yahav Bechavod, Katrina Ligett, Aaron Roth, Bo Waggoner, Zhiwei Steven Wu
We study an online classification problem with partial feedback in which individuals arrive one at a time from a fixed but unknown distribution, and must be classified as positive or negative.
no code implementations • 6 Dec 2018 • Matthew Jagielski, Michael Kearns, Jieming Mao, Alina Oprea, Aaron Roth, Saeed Sharifi-Malvajerdi, Jonathan Ullman
This algorithm is appealingly simple, but must be able to use protected group membership explicitly at test time, which can be viewed as a form of 'disparate treatment'.
no code implementations • 19 Nov 2018 • Seth Neel, Aaron Roth, Zhiwei Steven Wu
We show that there is an efficient algorithm for privately constructing synthetic data for any such class, given a non-private learning oracle.
no code implementations • 20 Oct 2018 • Alexandra Chouldechova, Aaron Roth
The last few years have seen an explosion of academic and popular interest in algorithmic fairness.
no code implementations • 30 Aug 2018 • Hadi Elzayn, Shahin Jabbari, Christopher Jung, Michael Kearns, Seth Neel, Aaron Roth, Zachary Schutzman
We formalize this fairness notion for allocation problems and investigate its algorithmic consequences.
no code implementations • 27 Aug 2018 • Sampath Kannan, Aaron Roth, Juba Ziani
We show that both goals can be achieved when the college does not report grades.
5 code implementations • 24 Aug 2018 • Michael Kearns, Seth Neel, Aaron Roth, Zhiwei Steven Wu
In this paper, we undertake an extensive empirical evaluation of the algorithm of Kearns et al. On four real datasets for which fairness is a concern, we investigate the basic convergence of the algorithm when instantiated with fast heuristics in place of learning oracles, and measure the tradeoffs between fairness and accuracy. We also compare this approach with the recent algorithm of Agarwal et al. [2018], which implements weaker and more traditional marginal fairness constraints defined by individual protected attributes.
no code implementations • ICML 2018 • Seth Neel, Aaron Roth
Data that is gathered adaptively -- via bandit algorithms, for example -- exhibits bias.
no code implementations • NeurIPS 2018 • Stephen Gillen, Christopher Jung, Michael Kearns, Aaron Roth
We consider the problem of online learning in the linear contextual bandits setting, but in which there are also strong individual fairness constraints governed by an unknown similarity metric.
no code implementations • NeurIPS 2018 • Matthew Joseph, Aaron Roth, Jonathan Ullman, Bo Waggoner
Moreover, existing techniques to mitigate this effect do not apply in the "local model" of differential privacy that these systems use.
no code implementations • NeurIPS 2018 • Sampath Kannan, Jamie Morgenstern, Aaron Roth, Bo Waggoner, Zhiwei Steven Wu
Bandit learning is characterized by the tension between long-term exploration and short-term exploitation.
no code implementations • NeurIPS 2017 • Katrina Ligett, Seth Neel, Aaron Roth, Bo Waggoner, Steven Z. Wu
Traditional approaches to differential privacy assume a fixed privacy requirement ε for a computation, and attempt to maximize the accuracy of the computation subject to the privacy constraint.
5 code implementations • ICML 2018 • Michael Kearns, Seth Neel, Aaron Roth, Zhiwei Steven Wu
We prove that the computational problem of auditing subgroup fairness for both equality of false positive rates and statistical parity is equivalent to the problem of weak agnostic learning, which means it is computationally hard in the worst case, even for simple structured subclasses.
no code implementations • 22 Oct 2017 • Jinshuo Dong, Aaron Roth, Zachary Schutzman, Bo Waggoner, Zhiwei Steven Wu
We study an online linear classification problem, in which the data is generated by strategic agents who manipulate their features in an effort to change the classification outcome.
no code implementations • ICML 2017 • Michael Kearns, Aaron Roth, Zhiwei Steven Wu
We consider the problem of selecting a strong pool of individuals from several populations with incomparable skills (e.g., soccer players, mathematicians, and singers) in a fair manner.
1 code implementation • 7 Jun 2017 • Richard Berk, Hoda Heidari, Shahin Jabbari, Matthew Joseph, Michael Kearns, Jamie Morgenstern, Seth Neel, Aaron Roth
We introduce a flexible family of fairness regularizers for (linear and logistic) regression problems.
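One simple convex instance of such a regularizer can be sketched as follows. This is a hypothetical simplification, not the paper's exact family: it assumes binary group labels and penalizes the squared gap between the two groups' mean residuals, added to an ordinary ridge loss:

```python
import numpy as np

def fair_ridge_loss(w, X, y, groups, lam=0.1, gamma=1.0):
    """Ridge regression loss plus a group-fairness regularizer: the
    squared gap between the two groups' mean residuals. Convex in w,
    so it can be minimized with any standard convex solver."""
    resid = X @ w - y
    gap = resid[groups == 0].mean() - resid[groups == 1].mean()
    return float(np.mean(resid ** 2) + lam * np.dot(w, w) + gamma * gap ** 2)

# Toy usage: a weight vector with zero residuals pays only the ridge term.
X = np.array([[1., 0.], [0., 1.], [1., 1.], [2., 0.]])
y = np.array([1., 1., 2., 2.])
groups = np.array([0, 0, 1, 1])
loss = fair_ridge_loss(np.array([1., 1.]), X, y, groups)
```

The weight `gamma` traces out the accuracy-fairness trade-off: `gamma = 0` recovers plain ridge regression.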
1 code implementation • 30 May 2017 • Katrina Ligett, Seth Neel, Aaron Roth, Bo Waggoner, Z. Steven Wu
Traditional approaches to differential privacy assume a fixed privacy requirement $\epsilon$ for a computation, and attempt to maximize the accuracy of the computation subject to the privacy constraint.
no code implementations • 27 Mar 2017 • Richard Berk, Hoda Heidari, Shahin Jabbari, Michael Kearns, Aaron Roth
Methods: We draw on the existing literatures in criminology, computer science and statistics to provide an integrated examination of fairness and accuracy in criminal justice risk assessments.
no code implementations • ICML 2017 • Shahin Jabbari, Matthew Joseph, Michael Kearns, Jamie Morgenstern, Aaron Roth
We initiate the study of fairness in reinforcement learning, where the actions of a learning algorithm may affect its environment and future rewards.
no code implementations • 29 Oct 2016 • Matthew Joseph, Michael Kearns, Jamie Morgenstern, Seth Neel, Aaron Roth
We study fairness in linear bandit problems.
no code implementations • 19 Jul 2016 • Aaron Roth, Aleksandrs Slivkins, Jonathan Ullman, Zhiwei Steven Wu
We are able to apply this technique to the setting of unit demand buyers despite the fact that in that setting the goods are not divisible, and the natural fractional relaxation of a unit demand valuation is not strongly concave.
no code implementations • NeurIPS 2016 • Matthew Joseph, Michael Kearns, Jamie Morgenstern, Aaron Roth
This tight connection allows us to provide a provably fair algorithm for the linear contextual bandit problem with a polynomial dependence on the dimension, and to show (for a different class of functions) a worst-case exponential gap in regret between fair and non-fair learning algorithms.
no code implementations • 13 Apr 2016 • Ryan Rogers, Aaron Roth, Adam Smith, Om Thakkar
In this paper, we initiate a principled study of how the generalization properties of approximate differential privacy can be used to perform adaptive hypothesis testing, while giving statistically valid $p$-value corrections.
no code implementations • 24 Feb 2016 • Rachel Cummings, Katrina Ligett, Kobbi Nissim, Aaron Roth, Zhiwei Steven Wu
We also show that perfect generalization is a strictly stronger guarantee than differential privacy, but that, nevertheless, many learning tasks can be carried out subject to the guarantees of perfect generalization.
no code implementations • 3 Nov 2015 • Justin Hsu, Jamie Morgenstern, Ryan Rogers, Aaron Roth, Rakesh Vohra
Second, we provide learning-theoretic results that show that such prices are robust to changing the buyers in the market, so long as all buyers are sampled from the same (unknown) distribution.
1 code implementation • NeurIPS 2015 • Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, Omer Reingold, Aaron Roth
We also formalize and address the general problem of data reuse in adaptive data analysis.
no code implementations • NeurIPS 2016 • Shahin Jabbari, Ryan Rogers, Aaron Roth, Zhiwei Steven Wu
This models the problem of predicting the behavior of a rational agent whose goals are known, but whose resources are unknown.
no code implementations • 4 Apr 2015 • Aaron Roth, Jonathan Ullman, Zhiwei Steven Wu
In this paper we present an approach to solving for the leader's optimal strategy in certain Stackelberg games where the follower's utility function (and thus the subsequent best response of the follower) is unknown.
1 code implementation • 13 Feb 2015 • Gilles Barthe, Marco Gaboardi, Emilio Jesús Gallego Arias, Justin Hsu, Aaron Roth, Pierre-Yves Strub
To address both concerns, we explore techniques from computer-aided verification to construct formal proofs of incentive properties.
Computer Science and Game Theory • Logic in Computer Science
no code implementations • 10 Nov 2014 • Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, Omer Reingold, Aaron Roth
We show that, surprisingly, there is a way to estimate an exponential in $n$ number of expectations accurately even if the functions are chosen adaptively.
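A minimal sketch of the noise-addition idea behind such mechanisms, under illustrative names and parameters (it deliberately omits the accounting of how many adaptive queries a given noise level can support, which is the substance of the result):

```python
import numpy as np

def answer_queries(data, queries, sigma=0.02, seed=0):
    """Answer statistical queries (vectorized functions x -> [0, 1],
    averaged over the sample) with Gaussian noise. Perturbing the
    empirical answers is the basic ingredient that lets mechanisms stay
    accurate even when later queries depend on earlier answers."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    return [float(np.mean(q(data)) + rng.normal(0, sigma)) for q in queries]

# Toy usage: two complementary queries on a balanced 0/1 sample.
data = [0] * 50 + [1] * 50
answers = answer_queries(data, [lambda x: x, lambda x: 1 - x])
```

Answering the same queries with exact empirical means, by contrast, can be badly overfit by an adaptive analyst.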
no code implementations • 27 Jul 2014 • Kareem Amin, Rachel Cummings, Lili Dworkin, Michael Kearns, Aaron Roth
We consider the problem of learning from revealed preferences in an online setting.
1 code implementation • 25 Jul 2014 • Gilles Barthe, Marco Gaboardi, Emilio Jesús Gallego Arias, Justin Hsu, Aaron Roth, Pierre-Yves Strub
Unlike typical programmatic properties, it is not sufficient for algorithms to merely satisfy the property -- incentive properties are only useful if the strategic agents also believe this fact.
Programming Languages • Computer Science and Game Theory
no code implementations • 15 Feb 2014 • Justin Hsu, Aaron Roth, Tim Roughgarden, Jonathan Ullman
In this paper, we initiate the systematic study of solving linear programs under differential privacy.
no code implementations • 6 Feb 2014 • Marco Gaboardi, Emilio Jesús Gallego Arias, Justin Hsu, Aaron Roth, Zhiwei Steven Wu
We present a practical, differentially private algorithm for answering a large number of queries on high dimensional datasets.
no code implementations • 26 Mar 2009 • Anupam Gupta, Katrina Ligett, Frank McSherry, Aaron Roth, Kunal Talwar
Is it even possible to design good algorithms for this problem that preserve the privacy of the clients?
Data Structures and Algorithms • Cryptography and Security • Computer Science and Game Theory