no code implementations • 15 Aug 2022 • Kareem Amin, Matthew Joseph, Mónica Ribero, Sergei Vassilvitskii
In this paper, we study an algorithm which uses the exponential mechanism to select a model with high Tukey depth from a collection of non-private regression models.
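The exponential mechanism referenced here can be sketched in a few lines. This is a minimal illustrative implementation, not the paper's algorithm: it assumes a finite candidate set and a generic utility function (in the paper's setting, the candidates would be the non-private regression models and the utility their Tukey depth).

```python
import math
import random

def exponential_mechanism(candidates, utility, epsilon, sensitivity=1.0):
    """Sample a candidate with probability proportional to
    exp(epsilon * utility / (2 * sensitivity))."""
    weights = [math.exp(epsilon * utility(c) / (2 * sensitivity))
               for c in candidates]
    total = sum(weights)
    r = random.uniform(0, total)
    acc = 0.0
    for c, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return c
    return candidates[-1]  # guard against floating-point rounding
```

With a large privacy parameter the mechanism concentrates on the highest-utility candidate; as `epsilon` shrinks, the selection approaches uniform, which is what buys differential privacy.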
no code implementations • ICLR 2022 • Albert Cheu, Matthew Joseph, Jieming Mao, Binghui Peng
In shuffle privacy, each user sends a collection of randomized messages to a trusted shuffler, the shuffler randomly permutes these messages, and the resulting shuffled collection of messages must satisfy differential privacy.
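The shuffle-model pipeline described above can be sketched concretely. The sketch below is an assumed, simplified instance (one-bit messages with standard randomized response as the local randomizer), not the protocol from the paper; it only illustrates the roles of the users and the shuffler.

```python
import math
import random

def randomize_bit(bit, epsilon):
    # Local randomizer (standard randomized response): keep the true
    # bit with probability e^eps / (1 + e^eps), otherwise flip it.
    p_keep = math.exp(epsilon) / (1 + math.exp(epsilon))
    return bit if random.random() < p_keep else 1 - bit

def shuffle_protocol(bits, epsilon):
    # Each user sends one randomized message; the trusted shuffler
    # applies a uniformly random permutation, so the analyzer sees
    # only the multiset of messages, not who sent which one.
    messages = [randomize_bit(b, epsilon) for b in bits]
    random.shuffle(messages)
    return messages
```

The key point the snippet makes visible: after shuffling, the output carries no link between message and sender, which is what lets shuffle protocols achieve better privacy–accuracy trade-offs than the purely local model.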
no code implementations • 16 Feb 2021 • Jennifer Gillenwater, Matthew Joseph, Alex Kulesza
Quantiles are often used for summarizing and understanding data.
no code implementations • 20 Apr 2020 • Victor Balcer, Albert Cheu, Matthew Joseph, Jieming Mao
First, we give robustly shuffle private protocols and upper bounds for counting distinct elements and uniformity testing.
no code implementations • 4 Nov 2019 • Kareem Amin, Matthew Joseph, Jieming Mao
We show that the sample complexity of pure pan-private uniformity testing is $\Theta(k^{2/3})$.
no code implementations • 1 Jul 2019 • Matthew Joseph, Jieming Mao, Aaron Roth
We prove a general connection between the communication complexity of two-player games and the sample complexity of their multi-player locally private analogues.
no code implementations • 7 Apr 2019 • Matthew Joseph, Jieming Mao, Seth Neel, Aaron Roth
Next, we show that our reduction is tight by exhibiting a family of problems such that for any $k$, there is a fully interactive $k$-compositional protocol which solves the problem, while no sequentially interactive protocol can solve the problem without at least an $\tilde \Omega(k)$ factor more examples.
no code implementations • NeurIPS 2019 • Matthew Joseph, Janardhan Kulkarni, Jieming Mao, Zhiwei Steven Wu
We study a basic private estimation problem: each of $n$ users draws a single i.i.d.
no code implementations • NeurIPS 2018 • Matthew Joseph, Aaron Roth, Jonathan Ullman, Bo Waggoner
Moreover, existing techniques to mitigate this effect do not apply in the "local model" of differential privacy that these systems use.
1 code implementation • 7 Jun 2017 • Richard Berk, Hoda Heidari, Shahin Jabbari, Matthew Joseph, Michael Kearns, Jamie Morgenstern, Seth Neel, Aaron Roth
We introduce a flexible family of fairness regularizers for (linear and logistic) regression problems.
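A fairness regularizer for regression can be made concrete with a small sketch. The penalty below (the squared gap between the two groups' average predictions) is one illustrative member of such a family chosen for simplicity here, not necessarily the regularizers proposed in the paper.

```python
import numpy as np

def group_fairness_penalty(X, w, group):
    # Penalize the squared gap between the mean prediction on
    # group 0 and the mean prediction on group 1.
    preds = X @ w
    gap = preds[group == 0].mean() - preds[group == 1].mean()
    return gap ** 2

def regularized_loss(X, y, w, group, lam):
    # Ordinary squared loss plus the fairness penalty, traded off
    # by the hyperparameter lam.
    mse = np.mean((X @ w - y) ** 2)
    return mse + lam * group_fairness_penalty(X, w, group)
```

Because the penalty is differentiable in `w`, it drops directly into gradient-based training for linear or logistic regression, with `lam` controlling the accuracy–fairness trade-off.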
no code implementations • ICML 2017 • Shahin Jabbari, Matthew Joseph, Michael Kearns, Jamie Morgenstern, Aaron Roth
We initiate the study of fairness in reinforcement learning, where the actions of a learning algorithm may affect its environment and future rewards.
no code implementations • 29 Oct 2016 • Matthew Joseph, Michael Kearns, Jamie Morgenstern, Seth Neel, Aaron Roth
We study fairness in linear bandit problems.
no code implementations • NeurIPS 2016 • Matthew Joseph, Michael Kearns, Jamie Morgenstern, Aaron Roth
This tight connection allows us to provide a provably fair algorithm for the linear contextual bandit problem with a polynomial dependence on the dimension, and to show (for a different class of functions) a worst-case exponential gap in regret between fair and non-fair learning algorithms.