Search Results for author: Badih Ghazi

Found 18 papers, 2 papers with code

User-Level Differentially Private Learning via Correlated Sampling

no code implementations NeurIPS 2021 Badih Ghazi, Ravi Kumar, Pasin Manurangsi

Most works in learning with differential privacy (DP) have focused on the setting where each user has a single sample.

User-Level Private Learning via Correlated Sampling

no code implementations 21 Oct 2021 Badih Ghazi, Ravi Kumar, Pasin Manurangsi

Most works in learning with differential privacy (DP) have focused on the setting where each user has a single sample.

Large-Scale Differentially Private BERT

no code implementations 3 Aug 2021 Rohan Anil, Badih Ghazi, Vineet Gupta, Ravi Kumar, Pasin Manurangsi

In this work, we study the large-scale pretraining of BERT-Large with differentially private SGD (DP-SGD).

Language Modelling
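
As a rough illustration of the DP-SGD recipe the abstract refers to (per-example gradient clipping followed by Gaussian noise), here is a minimal NumPy sketch; the squared-error objective, clip norm, and noise multiplier are illustrative assumptions, not the paper's configuration.

    import numpy as np

    def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.0,
                    rng=np.random.default_rng(0)):
        """One DP-SGD step on a squared-error objective (illustrative only)."""
        # Per-example gradients for the loss 0.5 * (x.w - y)^2.
        residuals = X @ w - y                  # shape (n,)
        grads = residuals[:, None] * X         # shape (n, d): one gradient per example
        # Clip each example's gradient to L2 norm <= clip_norm.
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        grads = grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
        # Sum, add Gaussian noise calibrated to the clip norm, then average.
        noisy_sum = grads.sum(axis=0) + rng.normal(
            scale=noise_multiplier * clip_norm, size=w.shape)
        return w - lr * noisy_sum / len(X)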

Locally Private k-Means in One Round

no code implementations 20 Apr 2021 Alisa Chang, Badih Ghazi, Ravi Kumar, Pasin Manurangsi

We provide an approximation algorithm for k-means clustering in the one-round (aka non-interactive) local model of differential privacy (DP).
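
The paper's one-round algorithm is more sophisticated, but the basic shape of non-interactive local DP for clustering can be conveyed by a naive baseline: each user randomizes their own point before sending it, and the server clusters the noisy points. A minimal sketch under that (lossy) assumption:

    import numpy as np

    def one_round_kmeans(points, k, sigma=1.0, iters=10, seed=0):
        """Naive non-interactive baseline, not the paper's algorithm."""
        rng = np.random.default_rng(seed)
        # Each user perturbs their own point locally before sending it.
        noisy = points + rng.normal(scale=sigma, size=points.shape)
        # Server side: plain Lloyd's iterations on the noisy points.
        centers = noisy[rng.choice(len(noisy), size=k, replace=False)]
        for _ in range(iters):
            labels = ((noisy[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = noisy[labels == j].mean(axis=0)
        return centers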

Deep Learning with Label Differential Privacy

no code implementations NeurIPS 2021 Badih Ghazi, Noah Golowich, Ravi Kumar, Pasin Manurangsi, Chiyuan Zhang

The Randomized Response (RR) algorithm is a classical technique to improve robustness in survey aggregation, and has been widely adopted in applications with differential privacy guarantees.

Multi-class Classification
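
The RR primitive itself is a few lines: keep the true label with probability $e^{\epsilon} / (e^{\epsilon} + K - 1)$, otherwise emit one of the other $K - 1$ labels uniformly at random. A sketch of this standard $\epsilon$-DP version follows (the paper's label-DP training procedure builds on top of this primitive):

    import numpy as np

    def randomized_response(label, num_classes, epsilon,
                            rng=np.random.default_rng(0)):
        """Standard K-ary randomized response: epsilon-DP for one label."""
        keep_prob = np.exp(epsilon) / (np.exp(epsilon) + num_classes - 1)
        if rng.random() < keep_prob:
            return label
        # Otherwise output one of the other K-1 labels uniformly at random.
        others = [c for c in range(num_classes) if c != label]
        return int(rng.choice(others))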

On Avoiding the Union Bound When Answering Multiple Differentially Private Queries

no code implementations 16 Dec 2020 Badih Ghazi, Ravi Kumar, Pasin Manurangsi

On the other hand, the algorithm of Dagan and Kur has the remarkable advantage that its $\ell_{\infty}$ error bound of $O(\frac{1}{\epsilon}\sqrt{k \log \frac{1}{\delta}})$ holds not only in expectation but always (i.e., with probability one), whereas we can only obtain a high-probability (or expected) guarantee on the error.
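
For context, the baseline being improved on answers each of the $k$ sensitivity-1 queries with independent noise and union-bounds over the $k$ coordinates, which costs an extra logarithmic factor in $\ell_{\infty}$ error. A minimal sketch of that baseline using the textbook Gaussian mechanism (this is the point of comparison, not the paper's algorithm):

    import numpy as np

    def answer_queries_gaussian(true_answers, epsilon, delta,
                                rng=np.random.default_rng(0)):
        """Answer k sensitivity-1 queries with the Gaussian mechanism.

        The combined query vector has L2 sensitivity sqrt(k), so the
        textbook calibration uses
        sigma = sqrt(2 ln(1.25/delta)) * sqrt(k) / epsilon.
        """
        k = len(true_answers)
        sigma = np.sqrt(2 * np.log(1.25 / delta)) * np.sqrt(k) / epsilon
        return np.asarray(true_answers) + rng.normal(scale=sigma, size=k)

    # The max of k Gaussians concentrates around sigma * sqrt(2 ln k), so the
    # union-bound ell_infty error carries an extra sqrt(log k) factor that
    # the improved algorithms avoid.
    answers = answer_queries_gaussian(np.zeros(1000), epsilon=1.0, delta=1e-6)
    print(np.abs(answers).max())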

Sample-efficient proper PAC learning with approximate differential privacy

no code implementations 7 Dec 2020 Badih Ghazi, Noah Golowich, Ravi Kumar, Pasin Manurangsi

In this paper we prove that the sample complexity of properly learning a class of Littlestone dimension $d$ with approximate differential privacy is $\tilde O(d^6)$, ignoring privacy and accuracy parameters.

Robust and Private Learning of Halfspaces

no code implementations 30 Nov 2020 Badih Ghazi, Ravi Kumar, Pasin Manurangsi, Thao Nguyen

In this work, we study the trade-off between differential privacy and adversarial robustness under L2-perturbations in the context of learning halfspaces.

Adversarial Robustness

On Distributed Differential Privacy and Counting Distinct Elements

no code implementations 21 Sep 2020 Lijie Chen, Badih Ghazi, Ravi Kumar, Pasin Manurangsi

We study the setup where each of $n$ users holds an element from a discrete set, and the goal is to count the number of distinct elements across all users under the constraint of $(\epsilon, \delta)$-differential privacy. In the non-interactive local setting, we prove that the additive error of any protocol is $\Omega(n)$ for any constant $\epsilon$ and any $\delta$ inverse polynomial in $n$.
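
The $\Omega(n)$ local-model lower bound contrasts sharply with the central model, where the distinct count changes by at most 1 when a single user's element is replaced, so the Laplace mechanism already gives $O(1/\epsilon)$ error. A minimal sketch of that central-model baseline (not one of the paper's protocols):

    import numpy as np

    def dp_distinct_count(elements, epsilon, rng=np.random.default_rng(0)):
        # Central model: each user holds one element, so replacing one
        # user's element shifts the distinct count by at most 1
        # (sensitivity 1); Laplace(1/epsilon) noise gives epsilon-DP.
        true_count = len(set(elements))
        return true_count + rng.laplace(scale=1.0 / epsilon)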

Differentially Private Clustering: Tight Approximation Ratios

no code implementations NeurIPS 2020 Badih Ghazi, Ravi Kumar, Pasin Manurangsi

For several basic clustering problems, including Euclidean DensestBall, 1-Cluster, k-means, and k-median, we give efficient differentially private algorithms that achieve essentially the same approximation ratios as those that can be obtained by any non-private algorithm, while incurring only small additive errors.

Near-tight closure bounds for Littlestone and threshold dimensions

no code implementations 7 Jul 2020 Badih Ghazi, Noah Golowich, Ravi Kumar, Pasin Manurangsi

We study closure properties for the Littlestone and threshold dimensions of binary hypothesis classes.

Private Aggregation from Fewer Anonymous Messages

no code implementations 24 Sep 2019 Badih Ghazi, Pasin Manurangsi, Rasmus Pagh, Ameya Velingker

Using a reduction of Balle et al. (2019), our improved analysis of the protocol of Ishai et al. yields, in the same model, an $\left(\varepsilon, \delta\right)$-differentially private protocol for aggregation that, for any constant $\varepsilon > 0$ and any $\delta = \frac{1}{\mathrm{poly}(n)}$, incurs only a constant error and requires only a constant number of messages per party.

Cryptography and Security Data Structures and Algorithms
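
The underlying split-and-mix idea from Ishai et al. is easy to sketch: each party encodes its input as random additive shares modulo $q$ and submits each share as a separate anonymous message, and the analyzer sums the shuffled multiset to recover the exact total. The share count and modulus below are illustrative; the paper's contribution is showing that a constant number of messages suffices.

    import random

    def split_into_shares(x, num_messages, modulus, rng=random.Random(0)):
        # Encode x as additive shares mod q; each share is sent as a
        # separate anonymous message through the shuffler.
        shares = [rng.randrange(modulus) for _ in range(num_messages - 1)]
        shares.append((x - sum(shares)) % modulus)
        return shares

    def aggregate(all_messages, modulus):
        # The analyzer sees only the shuffled multiset of shares; their
        # sum mod q equals the sum of the parties' inputs.
        return sum(all_messages) % modulus

    q = 2**20
    parties = [3, 5, 7]
    messages = [s for x in parties
                for s in split_into_shares(x, num_messages=3, modulus=q)]
    random.shuffle(messages)  # stands in for the anonymous shuffler
    assert aggregate(messages, q) == sum(parties) % q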

On the Power of Multiple Anonymous Messages

no code implementations 29 Aug 2019 Badih Ghazi, Noah Golowich, Ravi Kumar, Rasmus Pagh, Ameya Velingker

Protocols in the multi-message shuffled model with $\mathrm{poly}(\log B, \log n)$ bits of communication per user and $\mathrm{poly}(\log B)$ error, which provide an exponential improvement in error over what is possible with single-message algorithms.

Scalable and Differentially Private Distributed Aggregation in the Shuffled Model

no code implementations 19 Jun 2019 Badih Ghazi, Rasmus Pagh, Ameya Velingker

Federated learning promises to make machine learning feasible on distributed, private datasets by implementing gradient descent using secure aggregation methods.

Federated Learning
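
The secure-aggregation primitive mentioned here can be illustrated with pairwise additive masks that cancel in the sum, so the server learns only the aggregate update. This is a generic construction shown purely for intuition, not the paper's shuffled-model protocol.

    import numpy as np

    def masked_updates(updates, modulus=2**32, seed=0):
        # Parties i < j agree on a random mask m_ij; i adds it and j
        # subtracts it, so every mask cancels in the global sum.
        rng = np.random.default_rng(seed)
        n = len(updates)
        masked = [u % modulus for u in updates]
        for i in range(n):
            for j in range(i + 1, n):
                m = int(rng.integers(modulus))
                masked[i] = (masked[i] + m) % modulus
                masked[j] = (masked[j] - m) % modulus
        return masked

    updates = [10, 20, 30]
    assert sum(masked_updates(updates)) % 2**32 == sum(updates) % 2**32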

Recursive Sketches for Modular Deep Learning

no code implementations 29 May 2019 Badih Ghazi, Rina Panigrahy, Joshua R. Wang

The sketch summarizes essential information about the inputs and outputs of the network and can be used to quickly identify key components and summary statistics of the inputs.
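
The paper's recursive sketches are more structured than this, but the basic idea of a compact summary of a layer's activations can be conveyed by a plain random projection, which approximately preserves norms and inner products (Johnson-Lindenstrauss); a minimal sketch under that simplification:

    import numpy as np

    def make_sketcher(input_dim, sketch_dim, seed=0):
        # A random-projection sketch: far smaller than the activation
        # vector, yet approximately norm-preserving.
        rng = np.random.default_rng(seed)
        R = rng.normal(size=(sketch_dim, input_dim)) / np.sqrt(sketch_dim)
        return lambda activations: R @ activations

    sketch = make_sketcher(input_dim=4096, sketch_dim=64)
    summary = sketch(np.random.default_rng(1).normal(size=4096))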

On the Power of Learning from $k$-Wise Queries

no code implementations 28 Feb 2017 Vitaly Feldman, Badih Ghazi

Hence it is natural to ask whether algorithms using $k$-wise queries can solve learning problems more efficiently and by how much.
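
Concretely, a $k$-wise statistical query evaluates a predicate on $k$ examples jointly instead of one at a time. A minimal mock of the two oracle types (modeling the adversarial tolerance as uniform noise is a simplification):

    import numpy as np

    def sq_oracle(samples, phi, tau, rng=np.random.default_rng(0)):
        """Unary statistical query: estimate E[phi(x)] within tolerance tau."""
        return float(np.mean([phi(x) for x in samples])) + rng.uniform(-tau, tau)

    def kwise_oracle(samples, phi, k, tau, rng=np.random.default_rng(1)):
        """k-wise query: phi sees k examples at once, phi(x1, ..., xk)."""
        groups = [samples[i:i + k] for i in range(0, len(samples) - k + 1, k)]
        return float(np.mean([phi(*g) for g in groups])) + rng.uniform(-tau, tau)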

The Optimality of Correlated Sampling

1 code implementation 4 Dec 2016 Mohammad Bavarian, Badih Ghazi, Elad Haramaty, Pritish Kamath, Ronald L. Rivest, Madhu Sudan

In this note, we give a surprisingly simple proof that this protocol is in fact tight.

Computational Complexity Information Theory
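
The protocol whose tightness the paper establishes can be sketched in a few lines: both players read the same shared stream of (point, threshold) pairs, and each outputs the first point accepted under its own distribution. A minimal version for a finite domain (variable names are mine):

    import numpy as np

    def correlated_sample(p, shared_randomness):
        # Accept the first shared point whose threshold falls under this
        # player's own probability mass; players with close distributions
        # then agree with high probability.
        for x, u in shared_randomness:
            if u < p[x]:
                return x

    def shared_stream(domain_size, seed=0):
        rng = np.random.default_rng(seed)
        while True:
            yield int(rng.integers(domain_size)), float(rng.random())

    p = np.array([0.5, 0.3, 0.2])
    q = np.array([0.45, 0.35, 0.2])
    a = correlated_sample(p, shared_stream(3, seed=42))
    b = correlated_sample(q, shared_stream(3, seed=42))  # same shared randomness
    # Pr[a != b] <= 2*TV(p, q) / (1 + TV(p, q)), the bound proved tight here.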
