no code implementations • 25 May 2023 • Yangsibo Huang, Haotian Jiang, Daogao Liu, Mohammad Mahdian, Jieming Mao, Vahab Mirrokni
In this paper, we study the setting in which data owners train machine learning models collaboratively under a privacy notion called joint differential privacy [Kearns et al., 2018].
no code implementations • 19 Jul 2022 • Mohammad Mahdian, Jieming Mao, Kangning Wang
In our model, the task is to pick the highest of $n$ values.
no code implementations • ICLR 2022 • Albert Cheu, Matthew Joseph, Jieming Mao, Binghui Peng
In shuffle privacy, each user sends a collection of randomized messages to a trusted shuffler, the shuffler randomly permutes these messages, and the resulting shuffled collection of messages must satisfy differential privacy.
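A minimal sketch of the shuffle model's message flow, assuming binary randomized response as each user's local randomizer (an illustrative choice, not the protocol analyzed in the paper); privacy is then required of the shuffled collection as a whole rather than of any single user's message:

```python
import random

def local_randomizer(bit, p_flip=0.25):
    """Each user randomizes their own bit before sending (illustrative randomized response)."""
    return bit if random.random() > p_flip else 1 - bit

def shuffler(messages):
    """The trusted shuffler only permutes messages, hiding which user sent which."""
    shuffled = list(messages)
    random.shuffle(shuffled)
    return shuffled

# Each user sends one randomized message; the analyzer sees only the shuffled multiset.
user_bits = [1, 0, 1, 1, 0]
messages = [local_randomizer(b) for b in user_bits]
print(shuffler(messages))
```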
no code implementations • NeurIPS 2020 • Alessandro Epasto, Mohammad Mahdian, Jieming Mao, Vahab Mirrokni, Lijie Ren
At the same time, more noise may need to be added to keep the algorithm differentially private, which can hurt its performance.
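To make this tradeoff concrete, here is a generic sketch (not the paper's algorithm) of releasing a count with the standard Laplace mechanism: the noise scale grows as the privacy parameter $\varepsilon$ shrinks, so stronger privacy means a noisier, less accurate answer.

```python
import numpy as np

def laplace_count(true_count, epsilon):
    """Release a count with Laplace noise; the sensitivity of a counting query is 1."""
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Stronger privacy (smaller epsilon) forces a larger noise scale and a less accurate answer.
for eps in [1.0, 0.1, 0.01]:
    print(eps, laplace_count(1000, eps))
```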
no code implementations • 20 Apr 2020 • Victor Balcer, Albert Cheu, Matthew Joseph, Jieming Mao
First, we give robustly shuffle private protocols and upper bounds for counting distinct elements and uniformity testing.
no code implementations • 4 Nov 2019 • Kareem Amin, Matthew Joseph, Jieming Mao
We show that the sample complexity of pure pan-private uniformity testing is $\Theta(k^{2/3})$.
no code implementations • 1 Jul 2019 • Matthew Joseph, Jieming Mao, Aaron Roth
We prove a general connection between the communication complexity of two-player games and the sample complexity of their multi-player locally private analogues.
no code implementations • 12 Jun 2019 • Mark Braverman, Jieming Mao, Yuval Peres
When the comparisons are noiseless, we characterize how the optimal sample complexity depends on the number of rounds (up to a polylogarithmic factor for general $r$ and up to a constant factor for $r=1$ or 2).
no code implementations • 7 Apr 2019 • Matthew Joseph, Jieming Mao, Seth Neel, Aaron Roth
Next, we show that our reduction is tight by exhibiting a family of problems such that for any $k$, there is a fully interactive $k$-compositional protocol which solves the problem, while no sequentially interactive protocol can solve the problem without at least an $\tilde \Omega(k)$ factor more examples.
no code implementations • 19 Feb 2019 • Nicole Immorlica, Jieming Mao, Aleksandrs Slivkins, Zhiwei Steven Wu
We consider Bayesian Exploration: a simple model in which the recommendation system (the "principal") controls the information flow to the users (the "agents") and strives to incentivize exploration via information asymmetry.
no code implementations • 6 Dec 2018 • Matthew Jagielski, Michael Kearns, Jieming Mao, Alina Oprea, Aaron Roth, Saeed Sharifi-Malvajerdi, Jonathan Ullman
This algorithm is appealingly simple, but must be able to use protected group membership explicitly at test time, which can be viewed as a form of 'disparate treatment'.
no code implementations • NeurIPS 2018 • Jieming Mao, Renato Leme, Jon Schneider
For the symmetric loss $\ell(f(x_t), y_t) = \vert f(x_t) - y_t \vert$, we provide an algorithm for this problem achieving total loss $O(\log T)$ when $d=1$ and $O(T^{(d-1)/d})$ when $d>1$, and show that both bounds are tight (up to a factor of $\sqrt{\log T}$).
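In the simplest non-contextual one-dimensional variant with a fixed unknown value $y \in [0,1]$ and only above/below feedback, bisection keeps the total symmetric loss $\sum_t |f_t - y|$ small; a minimal sketch of that feedback and loss model (an illustration only, not the paper's algorithm for general $d$):

```python
def bisection_play(y, T):
    """Guess the midpoint each round; binary feedback says whether the guess was too high."""
    lo, hi = 0.0, 1.0
    total_loss = 0.0
    for _ in range(T):
        guess = (lo + hi) / 2
        total_loss += abs(guess - y)   # symmetric loss |f(x_t) - y_t|
        if guess > y:                  # only above/below feedback is observed
            hi = guess
        else:
            lo = guess
    return total_loss

print(bisection_play(y=0.3137, T=50))  # stays bounded as the feasible interval halves each round
```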
no code implementations • NeurIPS 2019 • Matthew Joseph, Janardhan Kulkarni, Jieming Mao, Zhiwei Steven Wu
We study a basic private estimation problem: each of $n$ users draws a single i.i.d.
no code implementations • 14 Nov 2018 • Nicole Immorlica, Jieming Mao, Aleksandrs Slivkins, Zhiwei Steven Wu
We propose and design recommendation systems that incentivize efficient exploration.
no code implementations • 25 Nov 2017 • Mark Braverman, Jieming Mao, Jon Schneider, S. Matthew Weinberg
- There exists a learning algorithm $\mathcal{A}$ such that if the buyer bids according to $\mathcal{A}$ then the optimal strategy for the seller is simply to post the Myerson reserve for $D$ every round.
no code implementations • 25 Jul 2017 • Xi Chen, Yuanzhi Li, Jieming Mao
We study the active learning problem of top-$k$ ranking from multi-wise comparisons under the popular multinomial logit model.
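Under the multinomial logit (MNL) model, a multi-wise comparison over a query set returns item $i$ with probability proportional to its (unknown) positive weight $w_i$; a minimal sketch of sampling one such comparison, with hypothetical weights for illustration only:

```python
import random

def mnl_comparison(query_set, weights):
    """Return the winner of a multi-wise comparison under the multinomial logit model:
    item i is chosen from the query set with probability w_i / (sum of weights in the set)."""
    items = list(query_set)
    probs = [weights[i] for i in items]
    return random.choices(items, weights=probs, k=1)[0]

# Hypothetical item weights; an active top-k algorithm would adaptively choose the query sets.
weights = {"a": 3.0, "b": 1.0, "c": 0.5, "d": 2.0}
print(mnl_comparison({"a", "c", "d"}, weights))
```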
no code implementations • 27 Jun 2017 • Mark Braverman, Jieming Mao, Jon Schneider, S. Matthew Weinberg
We study a strategic version of the multi-armed bandit problem, where each arm is an individual strategic agent and we, the principal, pull one arm each round.
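A toy sketch of the interaction in such a model, under the illustrative assumption that each strategic arm privately realizes a reward and chooses how much of it to pass on to the principal (the retention rule and arm parameters here are placeholders, not the paper's equilibrium analysis):

```python
import random

def strategic_bandit_round(arms, pick):
    """One round: the principal pulls an arm; the arm realizes a private reward and
    strategically decides what fraction to pass along (illustrative retention rule)."""
    mean, retain_frac = arms[pick]
    realized = random.random() * 2 * mean      # private reward, unseen by the principal
    passed_on = (1 - retain_frac) * realized   # the principal only observes this amount
    return passed_on

# Hypothetical arms: (mean reward, fraction strategically withheld).
arms = [(0.8, 0.5), (0.5, 0.1), (0.3, 0.0)]
for t in range(5):
    pick = random.randrange(len(arms))         # a real principal would learn which arm to pull
    print(t, pick, round(strategic_bandit_round(arms, pick), 3))
```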
no code implementations • 12 May 2016 • Xi Chen, Sivakanth Gopi, Jieming Mao, Jon Schneider
In particular, we present a linear-time algorithm for the top-$K$ problem with a competitive ratio of $\tilde{O}(\sqrt{n})$; i.e., to solve any instance of top-$K$, our algorithm needs at most $\tilde{O}(\sqrt{n})$ times as many samples as the best possible algorithm for that instance (in contrast, all previously known algorithms for the top-$K$ problem have competitive ratios of $\tilde{\Omega}(n)$ or worse).