no code implementations • 5 Oct 2023 • Nestor Maslej, Loredana Fattorini, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Helen Ngo, Juan Carlos Niebles, Vanessa Parli, Yoav Shoham, Russell Wald, Jack Clark, Raymond Perrault
Welcome to the sixth edition of the AI Index Report.
no code implementations • 15 Jan 2023 • Jon X. Eguia, Nicole Immorlica, Steven P. Lalley, Katrina Ligett, Glen Weyl, Dimitrios Xefteris
Consider the following collective choice problem: a group of budget-constrained agents must choose one of several alternatives.
no code implementations • NeurIPS 2023 • Moshe Shenfeld, Katrina Ligett
Repeated use of a data sample via adaptively chosen queries can rapidly lead to overfitting, wherein the empirical evaluation of queries on the sample significantly deviates from their mean with respect to the underlying data distribution.
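The phenomenon is easy to reproduce: on data with no signal at all, reusing the sample to first select features and then evaluate a classifier built from them yields answers that look accurate on the sample but are at chance on the underlying distribution. A toy numpy illustration (not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 1000
X = rng.choice([-1.0, 1.0], size=(n, d))
y = rng.choice([-1.0, 1.0], size=n)   # labels independent of all features

# Adaptive query round 1: empirical correlation of each feature with y.
corrs = X.T @ y / n

# Adaptive query round 2: evaluate a classifier built from the features the
# first round flagged -- a query chosen *because of* the sample's answers.
w = np.where(np.abs(corrs) > 1.0 / np.sqrt(n), np.sign(corrs), 0.0)
train_acc = np.mean(np.sign(X @ w) == y)

# On a fresh sample from the same distribution, the true accuracy of any
# such rule is 50%, so train_acc wildly overestimates it.
X_fresh = rng.choice([-1.0, 1.0], size=(n, d))
y_fresh = rng.choice([-1.0, 1.0], size=n)
fresh_acc = np.mean(np.sign(X_fresh @ w) == y_fresh)
```

Here `train_acc` comes out near 1.0 while `fresh_acc` hovers around 0.5, even though both are evaluations of the same query.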
no code implementations • 17 Feb 2020 • Yahav Bechavod, Katrina Ligett, Zhiwei Steven Wu, Juba Ziani
We consider an online regression setting in which individuals adapt to the regression model: arriving individuals are aware of the current model, and invest strategically in modifying their own features so as to improve the predicted score that the current model assigns to them.
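Under a quadratic manipulation cost the agents' best response has a closed form: an agent facing a published linear model shifts its features along the model's weight vector. A minimal sketch (the quadratic cost is an assumption for illustration, not the paper's general cost model):

```python
import numpy as np

def best_response(x, w, cost=1.0):
    """Best response of an agent with true features x to a published linear
    model w, under a hypothetical quadratic cost (cost/2)*||x' - x||^2:
    maximizing  w @ x' - (cost/2)*||x' - x||^2  over x'  gives
    x' = x + w / cost."""
    return x + w / cost

w = np.array([2.0, -1.0])        # current regression model
x = np.array([1.0, 1.0])         # agent's true features
x_manip = best_response(x, w)    # features the learner actually observes
```

The agent's predicted score rises from `w @ x = 1.0` to `w @ x_manip = 6.0`; the setting in the paper asks how the learner should regress when every arriving individual responds this way.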
no code implementations • 13 Feb 2020 • Vikas K. Garg, Adam Kalai, Katrina Ligett, Zhiwei Steven Wu
Domain generalization is the problem of machine learning when the training data and the test data come from different data domains.
no code implementations • 22 Nov 2019 • Haim Kaplan, Katrina Ligett, Yishay Mansour, Moni Naor, Uri Stemmer
This problem has received much attention recently; unlike the non-private case, where the sample complexity is independent of the domain size and just depends on the desired accuracy and confidence, for private learning the sample complexity must depend on the size $|X|$ of the domain (even for approximate differential privacy).
no code implementations • 9 Sep 2019 • Christopher Jung, Katrina Ligett, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi, Moshe Shenfeld
This second claim follows from a thought experiment in which we imagine that the dataset is resampled from the posterior distribution after the mechanism has committed to its answers.
no code implementations • NeurIPS 2019 • Katrina Ligett, Moshe Shenfeld
We introduce a new notion of the stability of computations, which holds under post-processing and adaptive composition.
no code implementations • 26 Apr 2019 • Daniel Alabi, Adam Tauman Kalai, Katrina Ligett, Cameron Musco, Christos Tzamos, Ellen Vitercik
We present an algorithm that learns to maximally prune the search space on repeated computations, thereby reducing runtime while provably outputting the correct solution each period with high probability.
1 code implementation • NeurIPS 2019 • Yahav Bechavod, Katrina Ligett, Aaron Roth, Bo Waggoner, Zhiwei Steven Wu
We study an online classification problem with partial feedback in which individuals arrive one at a time from a fixed but unknown distribution, and must be classified as positive or negative.
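The one-sided feedback structure is easy to simulate: the learner observes the true label only for individuals it classifies as positive, so rejected individuals yield no feedback at all. A toy 1-D threshold learner to make this concrete (a hypothetical learner for illustration; the paper's algorithms and fairness constraints are not implemented here):

```python
import numpy as np

class ThresholdLearner:
    """Toy accept/reject rule on a 1-D score, updated only from the
    partial feedback the learner actually receives."""
    def __init__(self, theta=0.5, lr=0.05):
        self.theta, self.lr = theta, lr
        self.feedback_seen = 0

    def predict(self, x):
        return 1 if x >= self.theta else 0

    def update(self, x, y):
        self.feedback_seen += 1
        # Tighten the threshold after an admitted false positive,
        # loosen it after an admitted true positive.
        self.theta += self.lr if y == 0 else -self.lr

rng = np.random.default_rng(0)
learner = ThresholdLearner()
for _ in range(200):
    x = rng.uniform()
    y = int(x > 0.6)                  # hypothetical ground truth
    if learner.predict(x) == 1:       # label observed only on positives
        learner.update(x, y)
```

After the run, `learner.feedback_seen` is well below 200: every rejection is an example on which the learner will never learn whether it was right, which is exactly what makes this setting harder than full-feedback online classification.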
no code implementations • NeurIPS 2017 • Katrina Ligett, Seth Neel, Aaron Roth, Bo Waggoner, Steven Z. Wu
Traditional approaches to differential privacy assume a fixed privacy requirement ε for a computation, and attempt to maximize the accuracy of the computation subject to the privacy constraint.
2 code implementations • 30 Jun 2017 • Yahav Bechavod, Katrina Ligett
We present a new approach for mitigating unfairness in learned classifiers.
1 code implementation • 30 May 2017 • Katrina Ligett, Seth Neel, Aaron Roth, Bo Waggoner, Z. Steven Wu
Traditional approaches to differential privacy assume a fixed privacy requirement $\epsilon$ for a computation, and attempt to maximize the accuracy of the computation subject to the privacy constraint.
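The "accuracy first" inversion can be sketched with the Laplace mechanism: fix a target accuracy and search upward for a privacy level that achieves it. The doubling loop below is a naive illustration only; the paper's noise-reduction construction lets the analyst pay (ex post) only for the final $\epsilon$, which this sketch does not achieve.

```python
import numpy as np

def laplace_mean(data, eps, rng):
    """Laplace-mechanism release of the mean of values in [0, 1]
    (the sensitivity of the mean is 1/n)."""
    n = len(data)
    return float(np.mean(data) + rng.laplace(scale=1.0 / (n * eps)))

def accuracy_first_release(data, alpha, eps0=1e-3, rng=None):
    """Naive sketch: double epsilon until the Laplace noise scale drops
    below the accuracy target alpha, then release once at that epsilon."""
    if rng is None:
        rng = np.random.default_rng(0)
    n, eps = len(data), eps0
    while 1.0 / (n * eps) > alpha:
        eps *= 2.0
    return laplace_mean(data, eps, rng), eps

data = np.full(1000, 0.5)            # toy dataset of values in [0, 1]
value, eps = accuracy_first_release(data, alpha=0.01)
```

With $n = 1000$ and target $\alpha = 0.01$, the loop stops at the first doubled value of $\epsilon$ for which the noise scale $1/(n\epsilon)$ meets the target.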
no code implementations • 24 Feb 2016 • Rachel Cummings, Katrina Ligett, Kobbi Nissim, Aaron Roth, Zhiwei Steven Wu
We also show that perfect generalization is a strictly stronger guarantee than differential privacy, but that, nevertheless, many learning tasks can be carried out subject to the guarantees of perfect generalization.
no code implementations • 10 Jun 2015 • Rachel Cummings, Stratis Ioannidis, Katrina Ligett
We consider the problem of fitting a linear model to data held by individuals who are concerned about their privacy.
no code implementations • NeurIPS 2012 • Moritz Hardt, Katrina Ligett, Frank McSherry
We present a new algorithm for differentially private data release, based on a simple combination of the Exponential Mechanism with the Multiplicative Weights update rule.
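The combination is short enough to sketch end to end for linear (counting) queries over a histogram. This is a simplified toy rendering of the idea, not a faithful implementation of the paper, and the privacy budget below ($\epsilon = 100$) is deliberately absurd so the run is easy to inspect:

```python
import numpy as np

def private_release(data_hist, queries, T, eps, rng):
    """Sketch of the Exponential-Mechanism + Multiplicative-Weights loop:
    each round (i) privately select a high-error query with the Exponential
    Mechanism, (ii) measure it with Laplace noise, and (iii) update a
    synthetic histogram by multiplicative weights."""
    n = data_hist.sum()
    synth = np.full_like(data_hist, n / data_hist.size, dtype=float)
    eps_round = eps / (2 * T)   # split budget across rounds, then halve
    for _ in range(T):
        errors = np.array([abs(q @ data_hist - q @ synth) for q in queries])
        # Exponential Mechanism: P(select q_i) ∝ exp(eps_round * error_i / 2)
        p = np.exp(eps_round * (errors - errors.max()) / 2)
        i = rng.choice(len(queries), p=p / p.sum())
        # Laplace measurement of the selected counting query (sensitivity 1)
        m = queries[i] @ data_hist + rng.laplace(scale=1.0 / eps_round)
        # Multiplicative-weights update toward the noisy measurement
        synth *= np.exp(queries[i] * (m - queries[i] @ synth) / (2 * n))
        synth *= n / synth.sum()          # renormalize to total mass n
    return synth

data_hist = np.array([30.0, 0.0, 0.0, 0.0])   # toy 4-bin histogram
queries = [np.eye(4)[i] for i in range(4)]    # per-bin counting queries
synth = private_release(data_hist, queries, T=5, eps=100.0,
                        rng=np.random.default_rng(0))
```

Starting from the uniform histogram (7.5 per bin), the loop repeatedly selects the worst-approximated bin and shifts synthetic mass toward it, so `synth[0]` climbs toward the true count of 30 over the rounds.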
no code implementations • 26 Mar 2009 • Anupam Gupta, Katrina Ligett, Frank McSherry, Aaron Roth, Kunal Talwar
Is it even possible to design good algorithms for this problem that preserve the privacy of the clients?
Data Structures and Algorithms • Cryptography and Security • Computer Science and Game Theory