Search Results for author: Kunal Talwar

Found 51 papers, 13 papers with code

Differentially Private Combinatorial Optimization

no code implementations 26 Mar 2009 Anupam Gupta, Katrina Ligett, Frank McSherry, Aaron Roth, Kunal Talwar

Is it even possible to design good algorithms for this problem that preserve the privacy of the clients?

Data Structures and Algorithms Cryptography and Security Computer Science and Game Theory

Analyze Gauss: Optimal Bounds for Privacy-Preserving Principal Component Analysis

1 code implementation 1 May 2014 Cynthia Dwork, Kunal Talwar, Abhradeep Thakurta, Li Zhang

We show that the well-known, but misnamed, randomized response algorithm, with properly tuned parameters, provides a nearly optimal additive quality gap compared to the best possible singular subspace of A.

Attribute Privacy Preserving
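
The structure of this mechanism, perturbing the covariance $A^\top A$ with a symmetric Gaussian noise matrix before extracting the top-$k$ eigenvectors, can be illustrated with a minimal NumPy sketch. The function name and `sigma` are illustrative; the paper's actual contribution is calibrating the noise to the privacy budget and bounding the resulting quality gap, both omitted here.

```python
import numpy as np

def analyze_gauss_sketch(A, k, sigma, rng):
    """Perturb the covariance A^T A with a symmetric Gaussian noise
    matrix, then return the top-k eigenvectors of the noisy matrix.
    Here sigma is a free parameter; the paper calibrates it to the
    (epsilon, delta) privacy budget."""
    d = A.shape[1]
    E = rng.normal(0.0, sigma, size=(d, d))
    E = (E + E.T) / 2.0                       # symmetrize the noise
    noisy_cov = A.T @ A + E
    vals, vecs = np.linalg.eigh(noisy_cov)    # ascending eigenvalues
    top = np.argsort(vals)[::-1][:k]
    return vecs[:, top]

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 5))
V = analyze_gauss_sketch(A, k=2, sigma=0.1, rng=rng)
print(V.shape)  # (5, 2): an orthonormal basis for the noisy top subspace
```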

Private Empirical Risk Minimization Beyond the Worst Case: The Effect of the Constraint Set Geometry

1 code implementation 20 Nov 2014 Kunal Talwar, Abhradeep Thakurta, Li Zhang

In addition, we show that when the loss function is Lipschitz with respect to the $\ell_1$ norm and $\mathcal{C}$ is $\ell_1$-bounded, a differentially private version of the Frank-Wolfe algorithm gives error bounds of the form $\tilde{O}(n^{-2/3})$.
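
A minimal sketch of a noisy Frank-Wolfe loop over the $\ell_1$ ball, assuming the vertex is chosen by a Gumbel-noised arg-min over the gradient coordinates (the Gumbel-max trick gives an exponential-mechanism-style selection); the noise scale and all other parameters are illustrative, not the paper's calibration.

```python
import numpy as np

def private_frank_wolfe_sketch(grad, x0, steps, noise_scale, rng):
    """Noisy Frank-Wolfe over the l1 ball: at each step, pick a signed
    basis vector by a Gumbel-noised arg-min of <gradient, vertex>,
    then move with the standard 2/(t+2) step size."""
    x = x0.copy()
    d = x.size
    for t in range(steps):
        g = grad(x)
        scores = np.concatenate([g, -g])           # <g, +e_i> and <g, -e_i>
        j = int(np.argmax(-scores + rng.gumbel(0.0, noise_scale, 2 * d)))
        v = np.zeros(d)
        v[j % d] = 1.0 if j < d else -1.0          # chosen vertex of the ball
        gamma = 2.0 / (t + 2)
        x = (1 - gamma) * x + gamma * v            # convex step stays in the ball
    return x

rng = np.random.default_rng(1)
c = np.array([0.3, -0.2, 0.1])                     # toy optimum inside the l1 ball
x = private_frank_wolfe_sketch(lambda x: 2 * (x - c), np.zeros(3), 500, 1e-3, rng)
print(float(np.abs(x).sum()))                      # l1 norm never exceeds 1
```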

Nearly Optimal Private LASSO

no code implementations NeurIPS 2015 Kunal Talwar, Abhradeep Guha Thakurta, Li Zhang

In addition, we show that this error bound is nearly optimal amongst all differentially private algorithms.

Sketching and Neural Networks

no code implementations 19 Apr 2016 Amit Daniely, Nevena Lazic, Yoram Singer, Kunal Talwar

In stark contrast, our approach of improper learning, using a larger hypothesis class, allows the sketch size to have a logarithmic dependence on the degree.

Deep Learning with Differential Privacy

25 code implementations 1 Jul 2016 Martín Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, Li Zhang

Machine learning techniques based on neural networks are achieving remarkable results in a wide variety of domains.

BIG-bench Machine Learning
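
The training step this paper introduces, DP-SGD, clips each example's gradient and adds Gaussian noise to the batch average. A minimal NumPy sketch of that single update; the learning rate, clipping norm, and noise multiplier below are illustrative values, and the paper's moments-accountant privacy analysis is omitted.

```python
import numpy as np

def dp_sgd_step(w, per_example_grads, lr, clip_norm, noise_mult, rng):
    """One DP-SGD update: clip each example's gradient to l2 norm
    clip_norm, average, add Gaussian noise scaled to the clipping
    norm, and take a gradient step."""
    clipped = [g / max(1.0, np.linalg.norm(g) / clip_norm)
               for g in per_example_grads]
    batch = len(clipped)
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm / batch, size=w.shape)
    return w - lr * (avg + noise)

rng = np.random.default_rng(0)
w = np.zeros(4)
grads = [rng.normal(size=4) * 10.0 for _ in range(8)]   # large raw gradients
w = dp_sgd_step(w, grads, lr=0.1, clip_norm=1.0, noise_mult=1.1, rng=rng)
print(w.shape)  # (4,)
```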

Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data

8 code implementations 18 Oct 2016 Nicolas Papernot, Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, Kunal Talwar

The approach combines, in a black-box fashion, multiple models trained with disjoint datasets, such as records from different subsets of users.

Transfer Learning
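
The black-box combination step can be illustrated as a noisy-max vote over the predictions of teachers trained on disjoint shards. A minimal sketch, with an illustrative Laplace noise scale and helper name; the paper's privacy accounting for repeated queries is omitted.

```python
import math
import random

def noisy_teacher_vote(teacher_labels, num_classes, noise_scale, rng):
    """Tally the votes of teachers trained on disjoint data, add
    Laplace noise to each count, and return the noisy arg-max."""
    counts = [0] * num_classes
    for label in teacher_labels:
        counts[label] += 1
    def laplace(b):
        u = rng.random() - 0.5                 # inverse-CDF Laplace sample
        return -b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    noisy = [c + laplace(noise_scale) for c in counts]
    return max(range(num_classes), key=noisy.__getitem__)

rng = random.Random(0)
votes = [2] * 9 + [0]                          # 10 teachers, strong consensus
label = noisy_teacher_vote(votes, num_classes=3, noise_scale=0.05, rng=rng)
print(label)   # the consensus class wins despite the noise
```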

On the Protection of Private Information in Machine Learning Systems: Two Recent Approaches

no code implementations 26 Aug 2017 Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Nicolas Papernot, Kunal Talwar, Li Zhang

The recent, remarkable growth of machine learning has led to intense interest in the privacy of the data on which machine learning relies, and to new techniques for preserving privacy.

BIG-bench Machine Learning

Learning Differentially Private Recurrent Language Models

1 code implementation ICLR 2018 H. Brendan McMahan, Daniel Ramage, Kunal Talwar, Li Zhang

We demonstrate that it is possible to train large recurrent language models with user-level differential privacy guarantees with only a negligible cost in predictive accuracy.

Learning Representations for Faster Similarity Search

no code implementations ICLR 2018 Ludwig Schmidt, Kunal Talwar

Based on our experiments, we propose a number of training modifications that lead to significantly better datasets for nearest neighbor algorithms.

General Classification

Online learning over a finite action set with limited switching

no code implementations 5 Mar 2018 Jason Altschuler, Kunal Talwar

Using the above result and several reductions, we unify previous work and completely characterize the complexity of this switching budget setting up to small polylogarithmic factors: for both PFE and MAB, for all switching budgets $S \leq T$, and for both expectation and high probability.

Multi-Armed Bandits

Adversarially Robust Generalization Requires More Data

no code implementations NeurIPS 2018 Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, Aleksander Mądry

We postulate that the difficulty of training robust classifiers stems, at least partially, from this inherently larger sample complexity.

General Classification Image Classification

Online Linear Quadratic Control

no code implementations ICML 2018 Alon Cohen, Avinatan Hassidim, Tomer Koren, Nevena Lazic, Yishay Mansour, Kunal Talwar

We study the problem of controlling linear time-invariant systems with known noisy dynamics and adversarially chosen quadratic losses.

Privacy Amplification by Iteration

no code implementations 20 Aug 2018 Vitaly Feldman, Ilya Mironov, Kunal Talwar, Abhradeep Thakurta

In addition, we demonstrate that we can achieve guarantees similar to those obtainable using the privacy-amplification-by-sampling technique in several natural settings where that technique cannot be applied.

Private Selection from Private Candidates

no code implementations 19 Nov 2018 Jingcheng Liu, Kunal Talwar

In this work, we consider the selection problem under a much weaker stability assumption on the candidates, namely that the score functions are differentially private.

Computational Efficiency Hyperparameter Optimization

Amplification by Shuffling: From Local to Central Differential Privacy via Anonymity

no code implementations 29 Nov 2018 Úlfar Erlingsson, Vitaly Feldman, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, Abhradeep Thakurta

We study the collection of such statistics in the local differential privacy (LDP) model, and describe an algorithm whose privacy cost is polylogarithmic in the number of changes to a user's value.

Better Algorithms for Stochastic Bandits with Adversarial Corruptions

no code implementations 22 Feb 2019 Anupam Gupta, Tomer Koren, Kunal Talwar

We study the stochastic multi-armed bandits problem in the presence of adversarial corruption.

Multi-Armed Bandits

Semi-Cyclic Stochastic Gradient Descent

no code implementations 23 Apr 2019 Hubert Eichner, Tomer Koren, H. Brendan McMahan, Nathan Srebro, Kunal Talwar

We consider convex SGD updates with a block-cyclic structure, i.e., where each cycle consists of a small number of blocks, each with many samples from a possibly different, block-specific, distribution.

Federated Learning

Private Stochastic Convex Optimization with Optimal Rates

no code implementations NeurIPS 2019 Raef Bassily, Vitaly Feldman, Kunal Talwar, Abhradeep Thakurta

A long line of existing work on private convex optimization focuses on the empirical loss and derives asymptotically tight bounds on the excess empirical loss.

Rényi Differential Privacy of the Sampled Gaussian Mechanism

2 code implementations 28 Aug 2019 Ilya Mironov, Kunal Talwar, Li Zhang

The Sampled Gaussian Mechanism (SGM), a composition of subsampling and additive Gaussian noise, has been successfully used in a number of machine learning applications.
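
The mechanism itself is simple to sketch: subsample, sum bounded values, add Gaussian noise. The sampling rate `q` and `sigma` below are illustrative; the paper's contribution is the Rényi-DP analysis of this composition, not the mechanism.

```python
import random

def sampled_gaussian_mechanism(values, q, sigma, rng):
    """SGM sketch: include each value independently with probability q
    (values clipped to [-1, 1] so one user changes the sum by at most 1),
    sum the subsample, and add N(0, sigma^2) noise."""
    total = 0.0
    for v in values:
        if rng.random() < q:
            total += max(-1.0, min(1.0, v))
    return total + rng.gauss(0.0, sigma)

rng = random.Random(0)
data = [0.5] * 1000
noisy_sum = sampled_gaussian_mechanism(data, q=0.1, sigma=1.0, rng=rng)
estimate = noisy_sum / 0.1        # debias by the sampling rate
print(estimate)                   # a noisy estimate of the true sum, 500
```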

Computational Separations between Sampling and Optimization

no code implementations NeurIPS 2019 Kunal Talwar

Two commonly arising computational tasks in Bayesian learning are Optimization (Maximum A Posteriori estimation) and Sampling (from the posterior distribution).

Characterizing Structural Regularities of Labeled Data in Overparameterized Models

1 code implementation 8 Feb 2020 Ziheng Jiang, Chiyuan Zhang, Kunal Talwar, Michael C. Mozer

We obtain empirical estimates of this score for individual instances in multiple data sets, and we show that the score identifies out-of-distribution and mislabeled examples at one end of the continuum and strongly regular examples at the other end.

Density Estimation Out-of-Distribution Detection +1

Private Stochastic Convex Optimization: Optimal Rates in Linear Time

no code implementations 10 May 2020 Vitaly Feldman, Tomer Koren, Kunal Talwar

We also give a linear-time algorithm achieving the optimal bound on the excess loss for the strongly convex case, as well as a faster algorithm for the non-smooth case.

Stochastic Optimization with Laggard Data Pipelines

no code implementations NeurIPS 2020 Naman Agarwal, Rohan Anil, Tomer Koren, Kunal Talwar, Cyril Zhang

State-of-the-art optimization is steadily shifting towards massively parallel pipelines with extremely large batch sizes.

Stochastic Optimization

Faster Differentially Private Samplers via Rényi Divergence Analysis of Discretized Langevin MCMC

no code implementations NeurIPS 2020 Arun Ganesh, Kunal Talwar

Various differentially private algorithms instantiate the exponential mechanism, and require sampling from the distribution $\exp(-f)$ for a suitable function $f$.
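
The discretized Langevin chain analyzed here follows the standard update $x \leftarrow x - \eta \nabla f(x) + \sqrt{2\eta}\,\xi$ with $\xi \sim N(0, I)$. A minimal one-dimensional sketch; the step size and chain length are illustrative, and the paper's Rényi-divergence analysis is omitted.

```python
import math
import random

def langevin_chain(grad_f, x0, eta, steps, rng):
    """Unadjusted Langevin algorithm targeting exp(-f) in one dimension:
    x <- x - eta * f'(x) + sqrt(2 * eta) * N(0, 1)."""
    x = x0
    for _ in range(steps):
        x = x - eta * grad_f(x) + math.sqrt(2.0 * eta) * rng.gauss(0.0, 1.0)
    return x

rng = random.Random(0)
# f(x) = x^2 / 2, so exp(-f) is the standard Gaussian density
samples = [langevin_chain(lambda x: x, 0.0, 0.05, 200, rng) for _ in range(500)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(mean, var)   # close to the target's mean 0 and variance 1
```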

On the Error Resistance of Hinge-Loss Minimization

no code implementations NeurIPS 2020 Kunal Talwar

Commonly used classification algorithms in machine learning, such as support vector machines, minimize a convex surrogate loss on training examples.

On the Error Resistance of Hinge Loss Minimization

no code implementations 2 Dec 2020 Kunal Talwar

Commonly used classification algorithms in machine learning, such as support vector machines, minimize a convex surrogate loss on training examples.

When is Memorization of Irrelevant Training Data Necessary for High-Accuracy Learning?

1 code implementation 11 Dec 2020 Gavin Brown, Mark Bun, Vitaly Feldman, Adam Smith, Kunal Talwar

Our problems are simple and fairly natural variants of the next-symbol prediction and the cluster labeling tasks.

Memorization

Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy Amplification by Shuffling

1 code implementation 23 Dec 2020 Vitaly Feldman, Audra McMillan, Kunal Talwar

As a direct corollary of our analysis we derive a simple and nearly optimal algorithm for frequency estimation in the shuffle model of privacy.
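
For intuition, frequency estimation from locally randomized reports can be sketched as below. This generic randomized-response encoder and its debiasing step are an assumption for illustration only, not the paper's shuffle-model analysis; the shuffler's role is simply to strip the link between users and reports.

```python
import random

def randomized_response(value, k, p, rng):
    """Each user reports their true value in {0, ..., k-1} with
    probability p, and a uniformly random value otherwise."""
    return value if rng.random() < p else rng.randrange(k)

def debiased_frequencies(reports, k, p):
    """Invert E[count_c / n] = p * f_c + (1 - p) / k to get an
    unbiased frequency estimate from the shuffled reports."""
    n = len(reports)
    counts = [0] * k
    for r in reports:
        counts[r] += 1
    return [(c / n - (1.0 - p) / k) / p for c in counts]

rng = random.Random(0)
truth = [0] * 700 + [1] * 200 + [2] * 100
reports = [randomized_response(v, 3, 0.8, rng) for v in truth]
freqs = debiased_frequencies(reports, 3, 0.8)
print(freqs)   # close to the true frequencies [0.7, 0.2, 0.1]
```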

Lossless Compression of Efficient Private Local Randomizers

no code implementations 24 Feb 2021 Vitaly Feldman, Kunal Talwar

Here we demonstrate a general approach that, under standard cryptographic assumptions, compresses every efficient LDP algorithm with negligible loss in privacy and utility guarantees.

Private Stochastic Convex Optimization: Optimal Rates in $\ell_1$ Geometry

no code implementations 2 Mar 2021 Hilal Asi, Vitaly Feldman, Tomer Koren, Kunal Talwar

Stochastic convex optimization over an $\ell_1$-bounded domain is ubiquitous in machine learning applications such as LASSO but remains poorly understood when learning with differential privacy.

Private Adaptive Gradient Methods for Convex Optimization

no code implementations 25 Jun 2021 Hilal Asi, John Duchi, Alireza Fallah, Omid Javidbakht, Kunal Talwar

We study adaptive methods for differentially private convex optimization, proposing and analyzing differentially private variants of a Stochastic Gradient Descent (SGD) algorithm with adaptive stepsizes, as well as the AdaGrad algorithm.
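
A hedged sketch of the shape of such a method: clip the gradient, add Gaussian noise, and drive a standard AdaGrad accumulator with the noisy gradient. All parameter values and the helper name are illustrative, and the paper's calibration and analysis are omitted.

```python
import numpy as np

def private_adagrad_sketch(grad, w0, steps, lr, clip_norm, sigma, rng):
    """Clip the gradient, add Gaussian noise, and feed the noisy
    gradient into a standard AdaGrad accumulator. sigma stands in
    for a privacy-calibrated noise scale."""
    w = w0.copy()
    acc = np.zeros_like(w)
    for _ in range(steps):
        g = grad(w)
        g = g / max(1.0, np.linalg.norm(g) / clip_norm)   # l2 clipping
        g = g + rng.normal(0.0, sigma * clip_norm, size=g.shape)
        acc += g * g                                      # AdaGrad accumulator
        w = w - lr * g / (np.sqrt(acc) + 1e-8)
    return w

rng = np.random.default_rng(0)
c = np.array([1.0, -1.0])                                 # toy optimum
w = private_adagrad_sketch(lambda w: 2 * (w - c), np.zeros(2), 500, 0.1, 1.0, 0.01, rng)
print(w)   # approaches c
```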

Differential Secrecy for Distributed Data and Applications to Robust Differentially Secure Vector Summation

no code implementations 22 Feb 2022 Kunal Talwar

In private federated learning applications, these vectors are held by client devices, leading to a distributed summation problem.

Federated Learning

Private Frequency Estimation via Projective Geometry

1 code implementation 1 Mar 2022 Vitaly Feldman, Jelani Nelson, Huy Lê Nguyen, Kunal Talwar

In many parameter settings used in practice this is a significant improvement over the $O(n+k^2)$ computation cost that is achieved by the recent PI-RAPPOR algorithm (Feldman and Talwar, 2021).

Optimal Algorithms for Mean Estimation under Local Differential Privacy

no code implementations 5 May 2022 Hilal Asi, Vitaly Feldman, Kunal Talwar

We show that PrivUnit (Bhowmick et al. 2018) with optimized parameters achieves the optimal variance among a large family of locally private randomizers.

Privacy of Noisy Stochastic Gradient Descent: More Iterations without More Privacy Loss

no code implementations 27 May 2022 Jason M. Altschuler, Kunal Talwar

A central issue in machine learning is how to train models on sensitive user data.

FLAIR: Federated Learning Annotated Image Repository

1 code implementation 18 Jul 2022 Congzheng Song, Filip Granqvist, Kunal Talwar

We believe FLAIR can serve as a challenging benchmark for advancing the state-of-the-art in federated learning.

Federated Learning Multi-Label Classification

Private Online Prediction from Experts: Separations and Faster Rates

no code implementations 24 Oct 2022 Hilal Asi, Vitaly Feldman, Tomer Koren, Kunal Talwar

Our lower bounds also show a separation between pure and approximate differential privacy for adaptive adversaries where the latter is necessary to achieve the non-private $O(\sqrt{T})$ regret.

Subspace Recovery from Heterogeneous Data with Non-isotropic Noise

no code implementations 24 Oct 2022 John Duchi, Vitaly Feldman, Lunjia Hu, Kunal Talwar

Our goal is to recover the linear subspace shared by $\mu_1,\ldots,\mu_n$ using the data points from all users, where every data point from user $i$ is formed by adding an independent mean-zero noise vector to $\mu_i$.

Federated Learning

Concentration of the Langevin Algorithm's Stationary Distribution

no code implementations 24 Dec 2022 Jason M. Altschuler, Kunal Talwar

This discretization leads the Langevin Algorithm to have a stationary distribution $\pi_{\eta}$ which differs from the stationary distribution $\pi$ of the Langevin Diffusion, and it is an important challenge to understand whether the well-known properties of $\pi$ extend to $\pi_{\eta}$.

Near-Optimal Algorithms for Private Online Optimization in the Realizable Regime

no code implementations 27 Feb 2023 Hilal Asi, Vitaly Feldman, Tomer Koren, Kunal Talwar

We also develop an adaptive algorithm for the small-loss setting with regret $O(L^\star \log d + \varepsilon^{-1} \log^{1.5} d)$ where $L^\star$ is the total loss of the best expert.

Differentially Private Heavy Hitter Detection using Federated Analytics

no code implementations 21 Jul 2023 Karan Chadha, Junye Chen, John Duchi, Vitaly Feldman, Hanieh Hashemi, Omid Javidbakht, Audra McMillan, Kunal Talwar

In this work, we study practical heuristics to improve the performance of prefix-tree based algorithms for differentially private heavy hitter detection.

Mean Estimation with User-level Privacy under Data Heterogeneity

no code implementations 28 Jul 2023 Rachel Cummings, Vitaly Feldman, Audra McMillan, Kunal Talwar

In this work we propose a simple model of heterogeneous user data that allows user data to differ in both distribution and quantity of data, and provide a method for estimating the population-level mean while preserving user-level differential privacy.

Private Vector Mean Estimation in the Shuffle Model: Optimal Rates Require Many Messages

no code implementations 16 Apr 2024 Hilal Asi, Vitaly Feldman, Jelani Nelson, Huy L. Nguyen, Samson Zhou, Kunal Talwar

We study the problem of private vector mean estimation in the shuffle model of privacy where $n$ users each have a unit vector $v^{(i)} \in\mathbb{R}^d$.
