Search Results for author: Jieming Mao

Found 18 papers, 0 papers with code

Learning across Data Owners with Joint Differential Privacy

no code implementations 25 May 2023 Yangsibo Huang, Haotian Jiang, Daogao Liu, Mohammad Mahdian, Jieming Mao, Vahab Mirrokni

In this paper, we study the setting in which data owners train machine learning models collaboratively under a privacy notion called joint differential privacy [Kearns et al., 2018].

Multi-class Classification

Shuffle Private Stochastic Convex Optimization

no code implementations ICLR 2022 Albert Cheu, Matthew Joseph, Jieming Mao, Binghui Peng

In shuffle privacy, each user sends a collection of randomized messages to a trusted shuffler, the shuffler randomly permutes these messages, and the resulting shuffled collection of messages must satisfy differential privacy.
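
Since the snippet above fully specifies the shuffle model's message flow, here is a minimal simulation of one round of it, using binary randomized response as a placeholder local randomizer; the randomizer, the flip probability `p`, and the debiasing step are illustrative assumptions, not the paper's protocol.

```python
import random

def randomize(bit, p=0.75):
    """Placeholder local randomizer (binary randomized response): report the true
    bit with probability p, the flipped bit otherwise."""
    return bit if random.random() < p else 1 - bit

def shuffle_model_round(user_bits, p=0.75):
    """Each user sends one randomized message; the trusted shuffler's only job is
    to return the messages in a uniformly random order, so the analyzer sees just
    the shuffled collection."""
    messages = [randomize(b, p) for b in user_bits]
    random.shuffle(messages)
    return messages

# Toy usage: recover an estimate of the users' total from the shuffled messages.
random.seed(0)
bits = [random.randint(0, 1) for _ in range(1000)]
msgs = shuffle_model_round(bits)
p = 0.75
estimate = (sum(msgs) - len(msgs) * (1 - p)) / (2 * p - 1)   # debias randomized response
print(round(estimate), sum(bits))
```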

Smoothly Bounding User Contributions in Differential Privacy

no code implementations NeurIPS 2020 Alessandro Epasto, Mohammad Mahdian, Jieming Mao, Vahab Mirrokni, Lijie Ren

At the same time, more noise might need to be added in order to keep the algorithm differentially private, and this can hurt the algorithm's performance.
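
The trade-off described above can be made concrete with the standard hard-capping baseline: clip each user's contribution so the sensitivity is bounded, then add Laplace noise scaled to that bound. The sketch below illustrates that baseline only (the cap, the aggregation, and the function name are assumptions); it is not the smooth bounding scheme the paper proposes.

```python
import numpy as np

def dp_sum_with_capped_contributions(user_values, cap, epsilon, seed=0):
    """Hard-capping baseline: clip each user's total contribution to `cap`, so the
    sum has sensitivity `cap`, then add Laplace noise of scale cap / epsilon.
    A larger cap preserves more of the signal but forces more noise; a smaller cap
    needs less noise but throws data away."""
    rng = np.random.default_rng(seed)
    clipped = [min(sum(vals), cap) for vals in user_values]
    return sum(clipped) + rng.laplace(scale=cap / epsilon)

# Example: three users contribute different numbers of nonnegative records.
users = [[1.0, 2.0, 0.5], [4.0], [0.2, 0.2]]
print(dp_sum_with_capped_contributions(users, cap=2.0, epsilon=1.0))
```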

Connecting Robust Shuffle Privacy and Pan-Privacy

no code implementations 20 Apr 2020 Victor Balcer, Albert Cheu, Matthew Joseph, Jieming Mao

First, we give robustly shuffle private protocols and upper bounds for counting distinct elements and uniformity testing.

Pan-Private Uniformity Testing

no code implementations 4 Nov 2019 Kareem Amin, Matthew Joseph, Jieming Mao

We show that the sample complexity of pure pan-private uniformity testing is $\Theta(k^{2/3})$.

Exponential Separations in Local Differential Privacy

no code implementations 1 Jul 2019 Matthew Joseph, Jieming Mao, Aaron Roth

We prove a general connection between the communication complexity of two-player games and the sample complexity of their multi-player locally private analogues.

Sorted Top-k in Rounds

no code implementations 12 Jun 2019 Mark Braverman, Jieming Mao, Yuval Peres

When the comparisons are noiseless, we characterize how the optimal sample complexity depends on the number of rounds (up to a polylogarithmic factor for general $r$ and up to a constant factor for $r=1$ or 2).

The Role of Interactivity in Local Differential Privacy

no code implementations 7 Apr 2019 Matthew Joseph, Jieming Mao, Seth Neel, Aaron Roth

Next, we show that our reduction is tight by exhibiting a family of problems such that for any $k$, there is a fully interactive $k$-compositional protocol which solves the problem, while no sequentially interactive protocol can solve the problem without at least an $\tilde \Omega(k)$ factor more examples.

Two-sample testing

Bayesian Exploration with Heterogeneous Agents

no code implementations 19 Feb 2019 Nicole Immorlica, Jieming Mao, Aleksandrs Slivkins, Zhiwei Steven Wu

We consider Bayesian Exploration: a simple model in which the recommendation system (the "principal") controls the information flow to the users (the "agents") and strives to incentivize exploration via information asymmetry.

Recommendation Systems

Differentially Private Fair Learning

no code implementations 6 Dec 2018 Matthew Jagielski, Michael Kearns, Jieming Mao, Alina Oprea, Aaron Roth, Saeed Sharifi-Malvajerdi, Jonathan Ullman

This algorithm is appealingly simple, but must be able to use protected group membership explicitly at test time, which can be viewed as a form of 'disparate treatment'.

Attribute Fairness
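
As a toy illustration of a decision rule that "uses protected group membership explicitly at test time", the sketch below looks up a group-specific threshold when classifying; the group names and thresholds are hypothetical, and this is not the paper's algorithm.

```python
def predict(score, group, thresholds):
    """Group-dependent decision rule: the protected attribute is consulted at test
    time to pick the threshold, which is what 'disparate treatment' refers to."""
    return int(score >= thresholds[group])

thresholds = {"A": 0.6, "B": 0.4}        # hypothetical group-specific cutoffs
print(predict(0.5, "A", thresholds))     # 0: rejected under group A's threshold
print(predict(0.5, "B", thresholds))     # 1: accepted under group B's threshold
```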

Contextual Pricing for Lipschitz Buyers

no code implementations NeurIPS 2018 Jieming Mao, Renato Leme, Jon Schneider

For the symmetric loss $\ell(f(x_t), y_t) = \vert f(x_t) - y_t \vert$, we provide an algorithm for this problem achieving total loss $O(\log T)$ when $d=1$ and $O(T^{(d-1)/d})$ when $d>1$, and show that both bounds are tight (up to a factor of $\sqrt{\log T}$).
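
To make the online setting and the symmetric loss concrete, here is a toy interaction loop for $d=1$ with a naive nearest-revealed-value placeholder predictor; the predictor and the Lipschitz function `f` are illustrative assumptions, not the paper's algorithm or its $O(\log T)$ guarantee.

```python
import numpy as np

def symmetric_loss(guess, value):
    """Per-round symmetric loss from the abstract: |f(x_t) - y_t|."""
    return abs(guess - value)

def run_protocol(f, T=1000, seed=0):
    """Toy interaction loop for d = 1: each round a context x_t arrives, the learner
    guesses the buyer's value f(x_t), then observes it and pays the symmetric loss.
    The nearest-revealed-value predictor is only a placeholder."""
    rng = np.random.default_rng(seed)
    observed = {}                      # context -> revealed value
    total_loss = 0.0
    for _ in range(T):
        x = float(rng.random())
        if observed:
            nearest = min(observed, key=lambda z: abs(z - x))
            guess = observed[nearest]  # predict the value seen at the closest context
        else:
            guess = 0.5
        y = f(x)                       # buyer's value: a 1-Lipschitz function of x
        total_loss += symmetric_loss(guess, y)
        observed[x] = y
    return total_loss

print(run_protocol(lambda x: 0.3 + 0.5 * x))   # cumulative symmetric loss over T rounds
```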

Locally Private Gaussian Estimation

no code implementations NeurIPS 2019 Matthew Joseph, Janardhan Kulkarni, Jieming Mao, Zhiwei Steven Wu

We study a basic private estimation problem: each of $n$ users draws a single i.i.d.
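
A minimal local-model baseline for this setting, assuming a known clipping range $[-B, B]$: each user perturbs their own draw with Laplace noise before sending it, and the analyzer averages the noisy reports. This is a generic sketch, not the adaptive protocol from the paper.

```python
import numpy as np

def local_release(sample, B, epsilon, rng):
    """Each user privatizes their own draw: clip to the assumed range [-B, B] and
    add Laplace noise calibrated to that range (sensitivity 2B), giving epsilon-LDP."""
    clipped = max(-B, min(B, sample))
    return clipped + rng.laplace(scale=2 * B / epsilon)

def estimate_mean(samples, B=5.0, epsilon=1.0, seed=0):
    """The analyzer only sees the noisy reports and averages them; the estimate is
    unbiased as long as no sample is clipped."""
    rng = np.random.default_rng(seed)
    reports = [local_release(float(s), B, epsilon, rng) for s in samples]
    return float(np.mean(reports))

data = np.random.default_rng(1).normal(loc=1.0, scale=1.0, size=10_000)  # users' i.i.d. draws
print(estimate_mean(data))   # close to 1.0, with extra variance from the local noise
```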

Selling to a No-Regret Buyer

no code implementations 25 Nov 2017 Mark Braverman, Jieming Mao, Jon Schneider, S. Matthew Weinberg

There exists a learning algorithm $\mathcal{A}$ such that if the buyer bids according to $\mathcal{A}$ then the optimal strategy for the seller is simply to post the Myerson reserve for $D$ every round.
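
For a single buyer with a regular value distribution $D$, the Myerson reserve mentioned above coincides with the revenue-maximizing posted price $\arg\max_p p\,(1 - F(p))$. The sketch below computes it by grid search for a uniform distribution; the grid and helper name are assumptions, and the computation is standard rather than taken from the paper.

```python
import numpy as np

def myerson_reserve(cdf, grid):
    """Grid search for the monopoly price of a (regular) value distribution D:
    the price p maximizing expected revenue p * (1 - F(p)), which for regular D
    equals the Myerson reserve the seller would post every round."""
    revenues = [p * (1 - cdf(p)) for p in grid]
    return grid[int(np.argmax(revenues))]

# Example: values uniform on [0, 1], so F(p) = p and the reserve is 1/2.
grid = np.linspace(0.0, 1.0, 10_001)
print(myerson_reserve(lambda p: p, grid))   # ~0.5
```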

A Nearly Instance Optimal Algorithm for Top-k Ranking under the Multinomial Logit Model

no code implementations 25 Jul 2017 Xi Chen, Yuanzhi Li, Jieming Mao

We study the active learning problem of top-$k$ ranking from multi-wise comparisons under the popular multinomial logit model.

Active Learning
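
The feedback model in the abstract can be simulated directly: under the multinomial logit model, item $i$ wins a multi-wise comparison over a queried subset $S$ with probability $w_i / \sum_{j \in S} w_j$. The sketch below samples that feedback for hypothetical item weights; choosing which subset to query, the core of the paper's active-learning algorithm, is not shown.

```python
import numpy as np

def mnl_comparison(weights, subset, rng):
    """One multi-wise comparison under the multinomial logit (MNL) model: item i in
    the queried subset S wins with probability w_i / sum_{j in S} w_j."""
    w = np.array([weights[i] for i in subset], dtype=float)
    winner_index = rng.choice(len(subset), p=w / w.sum())
    return subset[winner_index]

rng = np.random.default_rng(0)
weights = {0: 3.0, 1: 2.0, 2: 1.0, 3: 0.5}     # hypothetical item preference weights
wins = [mnl_comparison(weights, [0, 1, 2, 3], rng) for _ in range(1000)]
print({i: wins.count(i) for i in weights})      # item 0 should win most often
```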

Multi-armed Bandit Problems with Strategic Arms

no code implementations 27 Jun 2017 Mark Braverman, Jieming Mao, Jon Schneider, S. Matthew Weinberg

We study a strategic version of the multi-armed bandit problem, where each arm is an individual strategic agent and we, the principal, pull one arm each round.

Competitive analysis of the top-K ranking problem

no code implementations 12 May 2016 Xi Chen, Sivakanth Gopi, Jieming Mao, Jon Schneider

In particular, we present a linear-time algorithm for the top-$K$ problem which has a competitive ratio of $\tilde{O}(\sqrt{n})$; i.e., to solve any instance of top-$K$, our algorithm needs at most $\tilde{O}(\sqrt{n})$ times as many samples as the best possible algorithm for that instance (in contrast, all previously known algorithms for the top-$K$ problem have competitive ratios of $\tilde{\Omega}(n)$ or worse).

Recommendation Systems
