2 code implementations • 6 Mar 2023 • Shuai Tang, Sergul Aydore, Michael Kearns, Saeyoung Rho, Aaron Roth, Yichen Wang, Yu-Xiang Wang, Zhiwei Steven Wu
We revisit the problem of differentially private squared error linear regression.
1 code implementation • 30 May 2017 • Katrina Ligett, Seth Neel, Aaron Roth, Bo Waggoner, Z. Steven Wu
Traditional approaches to differential privacy assume a fixed privacy requirement $\epsilon$ for a computation, and attempt to maximize the accuracy of the computation subject to the privacy constraint.
3 code implementations • 7 May 2019 • Jinshuo Dong, Aaron Roth, Weijie J. Su
More precisely, the privacy guarantees of \emph{any} hypothesis-testing-based definition of privacy (including the original DP) converge to GDP in the limit under composition.
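For reference, GDP is defined through trade-off functions: a mechanism is $\mu$-GDP if its trade-off function on any pair of neighboring datasets is lower-bounded by that of testing $N(0,1)$ against $N(\mu,1)$,

```latex
G_\mu(\alpha) \;=\; \Phi\!\left(\Phi^{-1}(1-\alpha) - \mu\right), \qquad \mu \ge 0,
```

where $\Phi$ is the standard normal CDF and $\alpha$ is the type I error of the test.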
5 code implementations • ICML 2018 • Michael Kearns, Seth Neel, Aaron Roth, Zhiwei Steven Wu
We prove that the computational problem of auditing subgroup fairness for both equality of false positive rates and statistical parity is equivalent to the problem of weak agnostic learning, which means it is computationally hard in the worst case, even for simple structured subclasses.
5 code implementations • 24 Aug 2018 • Michael Kearns, Seth Neel, Aaron Roth, Zhiwei Steven Wu
In this paper, we undertake an extensive empirical evaluation of the algorithm of Kearns et al. [2018]. On four real datasets for which fairness is a concern, we investigate the basic convergence of the algorithm when instantiated with fast heuristics in place of learning oracles, measure the tradeoffs between fairness and accuracy, and compare this approach with the recent algorithm of Agarwal et al. [2018], which implements weaker and more traditional marginal fairness constraints defined by individual protected attributes.
1 code implementation • NeurIPS 2021 • Varun Gupta, Christopher Jung, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi, Chris Waites
In this paper, we give a general reduction from deletion guarantees against adaptive sequences to deletion guarantees against non-adaptive sequences, using differential privacy and its connection to max information.
1 code implementation • 11 Mar 2021 • Sergul Aydore, William Brown, Michael Kearns, Krishnaram Kenthapadi, Luca Melis, Aaron Roth, Ankit Siva
We propose, implement, and evaluate a new algorithm for releasing answers to very large numbers of statistical queries like $k$-way marginals, subject to differential privacy.
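The paper's algorithm is considerably more sophisticated than simple noise addition; as a baseline for comparison, here is a minimal sketch that releases all 2-way marginal proportions with the Gaussian mechanism. The dataset, $\epsilon$, and $\delta$ are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 3))  # toy binary dataset: 1000 rows, 3 features

def noisy_two_way_marginals(X, epsilon, delta):
    """Answer every 2-way marginal proportion query in one Gaussian-mechanism release."""
    n, d = X.shape
    answers = []
    for i in range(d):
        for j in range(i + 1, d):
            for a in (0, 1):
                for b in (0, 1):
                    answers.append(np.mean((X[:, i] == a) & (X[:, j] == b)))
    answers = np.asarray(answers)
    # Changing one row moves two cells by 1/n in each of the d*(d-1)/2 pairs,
    # so the L2 sensitivity of the whole answer vector is sqrt(2 * num_pairs) / n.
    num_pairs = d * (d - 1) // 2
    sensitivity = np.sqrt(2 * num_pairs) / n
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return answers + rng.normal(0, sigma, size=answers.shape)

answers = noisy_two_way_marginals(X, epsilon=1.0, delta=1e-5)
```

For very large query classes this baseline's error grows with the number of queries, which is the regime the paper targets.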
1 code implementation • 2 Jun 2022 • Osbert Bastani, Varun Gupta, Christopher Jung, Georgy Noarov, Ramya Ramalingam, Aaron Roth
It is computationally lightweight -- comparable to split conformal prediction -- but does not require having a held-out validation set, and so all data can be used for training models from which to derive a conformal score.
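For contrast, the split conformal baseline referenced above can be sketched in a few lines; the data, model, and miscoverage level $\alpha = 0.1$ here are illustrative. Note the held-out calibration split, which is exactly what the paper's method avoids.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_data(n):
    """Toy regression data: linear signal plus Gaussian noise."""
    x = rng.uniform(-2, 2, size=n)
    y = 2.0 * x + rng.normal(0, 0.5, size=n)
    return x, y

x_tr, y_tr = make_data(500)    # fit the model on this split
x_cal, y_cal = make_data(500)  # held-out calibration split
x_te, y_te = make_data(2000)   # fresh test points

# fit a simple least-squares line on the training split
slope, intercept = np.polyfit(x_tr, y_tr, 1)
predict = lambda x: slope * x + intercept

# conformal scores on the calibration split: absolute residuals
scores = np.abs(y_cal - predict(x_cal))
alpha = 0.1
# (1 - alpha)-quantile with the standard finite-sample correction
k = int(np.ceil((len(scores) + 1) * (1 - alpha)))
q = np.sort(scores)[k - 1]

# prediction intervals [f(x) - q, f(x) + q] cover ~90% of fresh points
covered = np.mean(np.abs(y_te - predict(x_te)) <= q)
```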
1 code implementation • 30 Sep 2022 • Christopher Jung, Georgy Noarov, Ramya Ramalingam, Aaron Roth
Multivalid coverage guarantees are stronger than marginal coverage guarantees in two ways: (1) They hold even conditional on group membership -- that is, the target coverage level $1-\alpha$ holds conditionally on membership in each group of an arbitrary (potentially intersecting) finite collection $\mathcal{G}$ of regions in the feature space.
1 code implementation • 6 Nov 2022 • Travis Dick, Cynthia Dwork, Michael Kearns, Terrance Liu, Aaron Roth, Giuseppe Vietri, Zhiwei Steven Wu
Our attacks significantly outperform those that are based only on access to a public distribution or population from which the private dataset $D$ was sampled, demonstrating that they are exploiting information in the aggregate statistics $Q(D)$, and not simply the overall structure of the distribution.
2 code implementations • 6 Jul 2020 • Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi
We study the data deletion problem for convex models.
1 code implementation • 5 Nov 2020 • Emily Diana, Wesley Gill, Michael Kearns, Krishnaram Kenthapadi, Aaron Roth
We consider a recently introduced framework in which fairness is measured by worst-case outcomes across groups, rather than by the more standard differences between group outcomes.
1 code implementation • 25 Jul 2014 • Gilles Barthe, Marco Gaboardi, Emilio Jesús Gallego Arias, Justin Hsu, Aaron Roth, Pierre-Yves Strub
Unlike typical programmatic properties, it is not sufficient for algorithms to merely satisfy the property---incentive properties are only useful if the strategic agents also believe this fact.
Programming Languages • Computer Science and Game Theory
1 code implementation • 13 Feb 2015 • Gilles Barthe, Marco Gaboardi, Emilio Jesús Gallego Arias, Justin Hsu, Aaron Roth, Pierre-Yves Strub
To address both concerns, we explore techniques from computer-aided verification to construct formal proofs of incentive properties.
Computer Science and Game Theory • Logic in Computer Science
1 code implementation • 31 Jan 2023 • Ira Globus-Harris, Declan Harrison, Michael Kearns, Aaron Roth, Jessica Sorrell
Using this characterization, we give an exceedingly simple algorithm that can be analyzed both as a boosting algorithm for regression and as a multicalibration algorithm for a class $H$, making use only of a standard squared error regression oracle for $H$. We give a weak learning assumption on $H$ that ensures convergence to Bayes optimality without the need to make any realizability assumptions -- giving us an agnostic boosting algorithm for regression.
1 code implementation • ICML 2020 • Seth Neel, Aaron Roth, Giuseppe Vietri, Zhiwei Steven Wu
We find that for the problem of learning linear classifiers, directly optimizing for 0/1 loss using our approach can outperform the more standard approach of privately optimizing a convex surrogate loss function on the Adult dataset.
1 code implementation • NeurIPS 2015 • Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, Omer Reingold, Aaron Roth
We also formalize and address the general problem of data reuse in adaptive data analysis.
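The mechanism underlying this line of work, the reusable holdout ("Thresholdout"), can be sketched as follows; the threshold and noise scale are illustrative, and the noise structure is simplified relative to the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)

def thresholdout(train_vals, holdout_vals, threshold=0.1, sigma=0.01):
    """Answer one adaptive query (the mean of some statistic per example).

    The training estimate is returned for free; the holdout is consulted,
    with noise, only when the two splits disagree noticeably. This limits
    how much information each query leaks about the holdout set.
    """
    t_hat = float(np.mean(train_vals))
    h_hat = float(np.mean(holdout_vals))
    if abs(t_hat - h_hat) > threshold + rng.normal(0, sigma):
        return h_hat + rng.normal(0, sigma)  # noisy holdout answer
    return t_hat  # splits agree: answer from the training set

# when the splits agree, the (possibly overfit) training value is returned
agree = thresholdout(np.full(100, 0.5), np.full(100, 0.5))
# when they disagree, a noisy holdout value is returned instead
disagree = thresholdout(np.full(100, 1.0), np.full(100, 0.0))
```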
1 code implementation • 12 Jun 2020 • Emily Diana, Travis Dick, Hadi Elzayn, Michael Kearns, Aaron Roth, Zachary Schutzman, Saeed Sharifi-Malvajerdi, Juba Ziani
We consider a variation on the classical finance problem of optimal portfolio design.
1 code implementation • 25 May 2019 • Christopher Jung, Michael Kearns, Seth Neel, Aaron Roth, Logan Stapleton, Zhiwei Steven Wu
We consider settings in which the right notion of fairness is not captured by simple mathematical definitions (such as equality of error rates across groups), but might be more complex and nuanced and thus require elicitation from individual or collective stakeholders.
1 code implementation • 21 Jun 2019 • Ryan Rogers, Aaron Roth, Adam Smith, Nathan Srebro, Om Thakkar, Blake Woodworth
We design a general framework for answering adaptive statistical queries that focuses on providing explicit confidence intervals along with point estimates.
no code implementations • ICML 2018 • Seth Neel, Aaron Roth
Data that is gathered adaptively --- via bandit algorithms, for example --- exhibits bias.
no code implementations • NeurIPS 2018 • Matthew Joseph, Aaron Roth, Jonathan Ullman, Bo Waggoner
Moreover, existing techniques to mitigate this effect do not apply in the "local model" of differential privacy that these systems use.
no code implementations • NeurIPS 2018 • Stephen Gillen, Christopher Jung, Michael Kearns, Aaron Roth
We consider the problem of online learning in the linear contextual bandits setting, but in which there are also strong individual fairness constraints governed by an unknown similarity metric.
no code implementations • NeurIPS 2018 • Sampath Kannan, Jamie Morgenstern, Aaron Roth, Bo Waggoner, Zhiwei Steven Wu
Bandit learning is characterized by the tension between long-term exploration and short-term exploitation.
no code implementations • 22 Oct 2017 • Jinshuo Dong, Aaron Roth, Zachary Schutzman, Bo Waggoner, Zhiwei Steven Wu
We study an online linear classification problem, in which the data is generated by strategic agents who manipulate their features in an effort to change the classification outcome.
no code implementations • ICML 2017 • Shahin Jabbari, Matthew Joseph, Michael Kearns, Jamie Morgenstern, Aaron Roth
We initiate the study of fairness in reinforcement learning, where the actions of a learning algorithm may affect its environment and future rewards.
no code implementations • 29 Oct 2016 • Matthew Joseph, Michael Kearns, Jamie Morgenstern, Seth Neel, Aaron Roth
We study fairness in linear bandit problems.
no code implementations • 19 Jul 2016 • Aaron Roth, Aleksandrs Slivkins, Jonathan Ullman, Zhiwei Steven Wu
We are able to apply this technique to the setting of unit demand buyers despite the fact that in that setting the goods are not divisible, and the natural fractional relaxation of a unit demand valuation is not strongly concave.
1 code implementation • 7 Jun 2017 • Richard Berk, Hoda Heidari, Shahin Jabbari, Matthew Joseph, Michael Kearns, Jamie Morgenstern, Seth Neel, Aaron Roth
We introduce a flexible family of fairness regularizers for (linear and logistic) regression problems.
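As an illustrative sketch with made-up data (not the paper's exact regularizer family, which includes individual- and group-level variants), one group-level regularizer penalizes the squared gap between the two groups' average predictions:

```python
import numpy as np

rng = np.random.default_rng(3)

# toy data: one feature x, binary group label g; the groups differ in x
n = 400
g = rng.integers(0, 2, size=n)
x = rng.normal(0, 1, size=n) + 0.8 * g
y = 1.5 * x + rng.normal(0, 0.3, size=n)
X = np.column_stack([x, np.ones(n)])  # feature plus intercept

def fit(lam, lr=0.02, iters=3000):
    """Least squares plus lam * (group mean-prediction gap)^2, by gradient descent."""
    w = np.zeros(2)
    for _ in range(iters):
        pred = X @ w
        grad_mse = 2 * X.T @ (pred - y) / n
        gap = pred[g == 1].mean() - pred[g == 0].mean()
        grad_gap = X[g == 1].mean(axis=0) - X[g == 0].mean(axis=0)
        w -= lr * (grad_mse + lam * 2 * gap * grad_gap)
    return w

w_plain = fit(lam=0.0)
w_fair = fit(lam=10.0)
gap = lambda w: abs((X @ w)[g == 1].mean() - (X @ w)[g == 0].mean())
```

Raising `lam` trades accuracy for a smaller gap between the groups' mean predictions, which is the fairness/accuracy dial the paper studies.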
no code implementations • 27 Mar 2017 • Richard Berk, Hoda Heidari, Shahin Jabbari, Michael Kearns, Aaron Roth
Methods: We draw on the existing literatures in criminology, computer science and statistics to provide an integrated examination of fairness and accuracy in criminal justice risk assessments.
no code implementations • NeurIPS 2016 • Matthew Joseph, Michael Kearns, Jamie Morgenstern, Aaron Roth
This tight connection allows us to provide a provably fair algorithm for the linear contextual bandit problem with a polynomial dependence on the dimension, and to show (for a different class of functions) a worst-case exponential gap in regret between fair and non-fair learning algorithms.
no code implementations • NeurIPS 2016 • Shahin Jabbari, Ryan Rogers, Aaron Roth, Zhiwei Steven Wu
This models the problem of predicting the behavior of a rational agent whose goals are known, but whose resources are unknown.
no code implementations • 13 Apr 2016 • Ryan Rogers, Aaron Roth, Adam Smith, Om Thakkar
In this paper, we initiate a principled study of how the generalization properties of approximate differential privacy can be used to perform adaptive hypothesis testing, while giving statistically valid $p$-value corrections.
no code implementations • 3 Nov 2015 • Justin Hsu, Jamie Morgenstern, Ryan Rogers, Aaron Roth, Rakesh Vohra
Second, we provide learning-theoretic results that show that such prices are robust to changing the buyers in the market, so long as all buyers are sampled from the same (unknown) distribution.
no code implementations • 24 Feb 2016 • Rachel Cummings, Katrina Ligett, Kobbi Nissim, Aaron Roth, Zhiwei Steven Wu
We also show that perfect generalization is a strictly stronger guarantee than differential privacy, but that, nevertheless, many learning tasks can be carried out subject to the guarantees of perfect generalization.
no code implementations • 10 Nov 2014 • Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, Omer Reingold, Aaron Roth
We show that, surprisingly, there is a way to estimate an exponential in $n$ number of expectations accurately even if the functions are chosen adaptively.
no code implementations • 6 Feb 2014 • Marco Gaboardi, Emilio Jesús Gallego Arias, Justin Hsu, Aaron Roth, Zhiwei Steven Wu
We present a practical, differentially private algorithm for answering a large number of queries on high dimensional datasets.
no code implementations • 4 Apr 2015 • Aaron Roth, Jonathan Ullman, Zhiwei Steven Wu
In this paper we present an approach to solving for the leader's optimal strategy in certain Stackelberg games where the follower's utility function (and thus the subsequent best response of the follower) is unknown.
no code implementations • 27 Jul 2014 • Kareem Amin, Rachel Cummings, Lili Dworkin, Michael Kearns, Aaron Roth
We consider the problem of learning from revealed preferences in an online setting.
no code implementations • 15 Feb 2014 • Justin Hsu, Aaron Roth, Tim Roughgarden, Jonathan Ullman
In this paper, we initiate the systematic study of solving linear programs under differential privacy.
no code implementations • 27 Aug 2018 • Sampath Kannan, Aaron Roth, Juba Ziani
We show that both goals can be achieved when the college does not report grades.
no code implementations • 30 Aug 2018 • Hadi Elzayn, Shahin Jabbari, Christopher Jung, Michael Kearns, Seth Neel, Aaron Roth, Zachary Schutzman
We formalize this fairness notion for allocation problems and investigate its algorithmic consequences.
no code implementations • 20 Oct 2018 • Alexandra Chouldechova, Aaron Roth
The last few years have seen an explosion of academic and popular interest in algorithmic fairness.
no code implementations • 19 Nov 2018 • Seth Neel, Aaron Roth, Zhiwei Steven Wu
We show that there is an efficient algorithm for privately constructing synthetic data for any such class, given a non-private learning oracle.
no code implementations • 6 Dec 2018 • Matthew Jagielski, Michael Kearns, Jieming Mao, Alina Oprea, Aaron Roth, Saeed Sharifi-Malvajerdi, Jonathan Ullman
This algorithm is appealingly simple, but must be able to use protected group membership explicitly at test time, which can be viewed as a form of 'disparate treatment'.
no code implementations • NeurIPS 2017 • Katrina Ligett, Seth Neel, Aaron Roth, Bo Waggoner, Steven Z. Wu
Traditional approaches to differential privacy assume a fixed privacy requirement ε for a computation, and attempt to maximize the accuracy of the computation subject to the privacy constraint.
no code implementations • ICML 2017 • Michael Kearns, Aaron Roth, Zhiwei Steven Wu
We consider the problem of selecting a strong pool of individuals from several populations with incomparable skills (e.g. soccer players, mathematicians, and singers) in a fair manner.
1 code implementation • NeurIPS 2019 • Yahav Bechavod, Katrina Ligett, Aaron Roth, Bo Waggoner, Zhiwei Steven Wu
We study an online classification problem with partial feedback in which individuals arrive one at a time from a fixed but unknown distribution, and must be classified as positive or negative.
no code implementations • 7 Apr 2019 • Matthew Joseph, Jieming Mao, Seth Neel, Aaron Roth
Next, we show that our reduction is tight by exhibiting a family of problems such that for any $k$, there is a fully interactive $k$-compositional protocol which solves the problem, while no sequentially interactive protocol can solve the problem without at least an $\tilde \Omega(k)$ factor more examples.
1 code implementation • NeurIPS 2019 • Michael Kearns, Aaron Roth, Saeed Sharifi-Malvajerdi
Given a sample of individuals and classification problems, we design an oracle-efficient algorithm (i.e. one that is given access to any standard, fairness-free learning heuristic) for the fair empirical risk minimization task.
no code implementations • 29 May 2019 • Hengchu Zhang, Edo Roth, Andreas Haeberlen, Benjamin C. Pierce, Aaron Roth
Curators of sensitive datasets sometimes need to know whether queries against the data are differentially private [Dwork et al. 2006].
Programming Languages • Logic in Computer Science
no code implementations • 1 Jul 2019 • Matthew Joseph, Jieming Mao, Aaron Roth
We prove a general connection between the communication complexity of two-player games and the sample complexity of their multi-player locally private analogues.
no code implementations • 9 Sep 2019 • Christopher Jung, Katrina Ligett, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi, Moshe Shenfeld
This second claim follows from a thought experiment in which we imagine that the dataset is resampled from the posterior distribution after the mechanism has committed to its answers.
no code implementations • 12 Dec 2019 • Emily Diana, Michael Kearns, Seth Neel, Aaron Roth
We consider a fundamental dynamic allocation problem motivated by the problem of $\textit{securities lending}$ in financial markets, the mechanism underlying the short selling of stocks.
no code implementations • 16 Feb 2020 • Eshwar Ram Arunachaleswaran, Sampath Kannan, Aaron Roth, Juba Ziani
We consider two objectives: social welfare maximization, and a fairness-motivated maximin objective that seeks to maximize the value to the population (starting node) with the \emph{least} expected value.
no code implementations • 18 Feb 2020 • Christopher Jung, Sampath Kannan, Changhwa Lee, Mallesh M. Pai, Aaron Roth, Rakesh Vohra
There is increasing regulatory interest in whether machine learning algorithms deployed in consequential domains (e.g. in criminal justice) treat different demographic groups "fairly."
no code implementations • 26 Mar 2009 • Anupam Gupta, Katrina Ligett, Frank McSherry, Aaron Roth, Kunal Talwar
Is it even possible to design good algorithms for this problem that preserve the privacy of the clients?
Data Structures and Algorithms • Cryptography and Security • Computer Science and Game Theory
no code implementations • 18 Aug 2020 • Christopher Jung, Changhwa Lee, Mallesh M. Pai, Aaron Roth, Rakesh Vohra
We show how to achieve the notion of "multicalibration" from Hébert-Johnson et al. [2018] not just for means, but also for variances and other higher moments.
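This is not the paper's algorithm (which handles variances and higher moments, and additionally buckets by prediction level); the following mean-only patching loop, with illustrative data and tolerances, conveys the basic fix-a-violated-group idea.

```python
import numpy as np

rng = np.random.default_rng(5)

n = 5000
x = rng.integers(0, 2, size=(n, 2))
y = 0.3 * x[:, 0] + 0.5 * x[:, 1] + rng.normal(0, 0.1, n)

# overlapping groups, given as indicator functions on the features
groups = [
    lambda x: x[:, 0] == 1,
    lambda x: x[:, 1] == 1,
    lambda x: np.ones(len(x), dtype=bool),  # everyone
]

def patch_group_means(pred, y, x, groups, tol=0.02, max_rounds=500):
    """Repeatedly shift predictions on any group whose mean residual exceeds
    tol; each patch strictly reduces squared error, so the loop terminates."""
    pred = pred.copy()
    for _ in range(max_rounds):
        patched = False
        for g in groups:
            mask = g(x)
            gap = y[mask].mean() - pred[mask].mean()
            if abs(gap) > tol:
                pred[mask] += gap
                patched = True
        if not patched:
            break
    return pred

pred = patch_group_means(np.zeros(n), y, x, groups)
```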
no code implementations • 5 Jan 2021 • Varun Gupta, Christopher Jung, Georgy Noarov, Mallesh M. Pai, Aaron Roth
We present a general, efficient technique for providing contextual predictions that are "multivalid" in various senses, against an online sequence of adversarially chosen examples $(x, y)$.
no code implementations • 16 Feb 2021 • Emily Diana, Wesley Gill, Ira Globus-Harris, Michael Kearns, Aaron Roth, Saeed Sharifi-Malvajerdi
We extend the notion of minimax fairness in supervised learning problems to its natural conclusion: lexicographic minimax fairness (or lexifairness for short).
no code implementations • 5 Apr 2021 • Jinshuo Dong, Aaron Roth, Weijie J. Su
In this rejoinder, we aim to address two broad issues that cover most comments made in the discussion.
no code implementations • 9 Jul 2021 • Emily Diana, Wesley Gill, Michael Kearns, Krishnaram Kenthapadi, Aaron Roth, Saeed Sharifi-Malvajerdi
The goal of the proxy is to allow a general "downstream" learner -- with minimal assumptions on their prediction task -- to be able to use the proxy to train a model that is fair with respect to the true sensitive features.
no code implementations • 9 Aug 2021 • Daniel Lee, Georgy Noarov, Mallesh Pai, Aaron Roth
We introduce a simple but general online learning framework in which a learner plays against an adversary in a vector-valued game that changes every round.
no code implementations • 25 Jan 2022 • Ira Globus-Harris, Michael Kearns, Aaron Roth
We propose and analyze an algorithmic framework for "bias bounties": events in which external participants are invited to propose improvements to a trained model, akin to bug bounty events in software and security.
no code implementations • CVPR 2022 • Aditya Golatkar, Alessandro Achille, Yu-Xiang Wang, Aaron Roth, Michael Kearns, Stefano Soatto
AdaMix incorporates few-shot training, or cross-modal zero-shot learning, on public data prior to private fine-tuning, to improve the trade-off.
no code implementations • 9 Jun 2022 • Yahav Bechavod, Aaron Roth
We consider an online learning problem with one-sided feedback, in which the learner is able to observe the true label only for positively predicted instances.
no code implementations • 4 Sep 2022 • Aaron Roth, Alexander Tolbert, Scott Weinstein
Individual probabilities refer to the probabilities of outcomes that are realized only once: the probability that it will rain tomorrow, the probability that Alice will die within the next 12 months, the probability that Bob will be arrested for a violent crime in the next 18 months, etc.
no code implementations • 15 Sep 2022 • Giuseppe Vietri, Cedric Archambeau, Sergul Aydore, William Brown, Michael Kearns, Aaron Roth, Ankit Siva, Shuai Tang, Zhiwei Steven Wu
A key innovation in our algorithm is the ability to directly handle numerical features, in contrast to a number of related prior approaches which require numerical features to be first converted into high-cardinality categorical features via a binning strategy.
no code implementations • 15 Sep 2022 • Ira Globus-Harris, Varun Gupta, Christopher Jung, Michael Kearns, Jamie Morgenstern, Aaron Roth
We show how to take a regression function $\hat{f}$ that is appropriately "multicalibrated" and efficiently post-process it into an approximately error-minimizing classifier satisfying a large variety of fairness constraints.
no code implementations • 16 Feb 2023 • Georgy Noarov, Aaron Roth
To further counter-weigh our negative result, we show that if a property $\Gamma^1$ is not elicitable by itself, but is elicitable conditionally on another elicitable property $\Gamma^0$, then there is a canonical algorithm that jointly multicalibrates $\Gamma^1$ and $\Gamma^0$; this generalizes past work on mean-moment multicalibration.
no code implementations • 26 Jun 2023 • Siqi Deng, Emily Diana, Michael Kearns, Aaron Roth
Importantly, we require that the proxy classification itself not reveal significant information about the sensitive group membership of any individual sample (i.e., it should be sufficiently non-disclosive).
no code implementations • 18 Jul 2023 • Sumegha Garg, Christopher Jung, Omer Reingold, Aaron Roth
We develop a new online multicalibration algorithm that is well defined for infinite benchmark classes $F$, and is oracle efficient (i.e. for any class $F$, the algorithm has the form of an efficient reduction to a no-regret learning algorithm for $F$).
no code implementations • 7 Oct 2023 • Krishna Acharya, Eshwar Ram Arunachaleswaran, Sampath Kannan, Aaron Roth, Juba Ziani
Our approach gives regret guarantees similar to those of [Blum & Lykouris]; however, we run in time linear in the number of groups, and are oracle-efficient in the hypothesis class.
no code implementations • 26 Oct 2023 • Georgy Noarov, Ramya Ramalingam, Aaron Roth, Stephan Xie
We study the problem of making predictions of an adversarially chosen high-dimensional state that are unbiased subject to an arbitrary collection of conditioning events, with the goal of tailoring these events to downstream decision makers.
no code implementations • 8 Dec 2023 • Shuai Tang, Zhiwei Steven Wu, Sergul Aydore, Michael Kearns, Aaron Roth
Our proposed MI attack learns quantile regression models that predict (a quantile of) the distribution of reconstruction loss on examples not used in training.
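As an illustrative sketch (synthetic losses and a hand-rolled linear quantile regression, not the paper's models or data), the attack logic -- fit a low quantile of the non-member loss distribution, then flag examples whose loss falls below it -- looks like:

```python
import numpy as np

rng = np.random.default_rng(4)

def pinball_fit(X, y, tau, lr=0.1, iters=8000):
    """Linear quantile regression by subgradient descent on the pinball loss."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        r = y - X @ w
        # subgradient of the mean pinball loss: -tau where r > 0, (1 - tau) where r < 0
        g = X.T @ np.where(r > 0, -tau, 1.0 - tau) / len(y)
        w -= lr * g
    return w

# synthetic "reconstruction losses": non-members' losses grow with a side
# feature z, members' losses are uniformly small
m = 4000
z_out = rng.uniform(0, 1, m)
loss_out = 1.0 + z_out + rng.normal(0, 0.2, m)  # non-members
z_in = rng.uniform(0, 1, m)
loss_in = rng.normal(0.3, 0.1, m)               # members

# fit the tau-quantile of the non-member loss as a function of z, then
# flag an example as a member when its loss falls below that quantile
tau = 0.05
X_out = np.column_stack([z_out, np.ones(m)])
w = pinball_fit(X_out, loss_out, tau)
threshold = lambda z: np.column_stack([z, np.ones(len(z))]) @ w

fpr = np.mean(loss_out < threshold(z_out))  # non-members wrongly flagged
tpr = np.mean(loss_in < threshold(z_in))    # members correctly flagged
```

By construction the false-positive rate tracks the chosen quantile level `tau`, which is what makes a quantile model a calibrated per-example attack threshold.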
no code implementations • 13 Feb 2024 • Aaron Roth, Mirah Shi
In the low dimensional setting, we show how to make predictions such that all agents who best respond to our predictions have diminishing swap regret -- in 1 dimension, at the optimal $O(\sqrt{T})$ rate.
no code implementations • 16 Feb 2024 • Ira Globus-Harris, Declan Harrison, Michael Kearns, Pietro Perona, Aaron Roth
There, unlike in classical crowdsourced ML, participants deliberately specialize their efforts by working on subproblems, such as demographic subgroups in the service of fairness.
no code implementations • 18 Feb 2024 • Eshwar Ram Arunachaleswaran, Natalie Collina, Aaron Roth, Mirah Shi
Blasiok et al. [2023] proposed distance to calibration as a natural measure of calibration error that, unlike expected calibration error (ECE), is continuous.
no code implementations • 27 Feb 2024 • Natalie Collina, Varun Gupta, Aaron Roth
First, we show that this game admits a pure-strategy \emph{non-responsive} equilibrium amongst the Agents -- informally, an equilibrium in which each Agent's actions depend on the history of realized states of nature, but not on the history of the other Agents' actions, and which therefore avoids the complexities of collusion and threats.
no code implementations • 6 Apr 2024 • Gianluca Detommaso, Martin Bertran, Riccardo Fogliato, Aaron Roth
This paper proposes the use of "multicalibration" to yield interpretable and reliable confidence scores for outputs generated by large language models (LLMs).