Search Results for author: Kunal Talwar

Found 35 papers, 10 papers with code

Private Adaptive Gradient Methods for Convex Optimization

no code implementations · 25 Jun 2021 · Hilal Asi, John Duchi, Alireza Fallah, Omid Javidbakht, Kunal Talwar

We study adaptive methods for differentially private convex optimization, proposing and analyzing differentially private variants of a Stochastic Gradient Descent (SGD) algorithm with adaptive stepsizes, as well as the AdaGrad algorithm.
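
The generic recipe behind such methods can be sketched as a clip-noise-adapt update in the AdaGrad style. This is an illustrative sketch, not the paper's algorithm; the function name, constants, and noise calibration are all assumptions:

```python
import numpy as np

def private_adagrad_step(w, per_example_grads, acc, clip=1.0,
                         noise_mult=1.0, lr=0.1, eps=1e-8):
    """One differentially private step: clip each per-example gradient
    to L2 norm <= clip, average, add Gaussian noise scaled to the
    clipping bound, then rescale AdaGrad-style by accumulated squares."""
    n = len(per_example_grads)
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    noisy_grad = np.mean(clipped, axis=0) + np.random.normal(
        0.0, noise_mult * clip / n, size=w.shape)
    acc = acc + noisy_grad ** 2                    # AdaGrad accumulator
    w = w - lr * noisy_grad / (np.sqrt(acc) + eps)
    return w, acc
```

Clipping bounds each example's sensitivity, so the Gaussian noise can be calibrated to the clipping norm; the accumulator then adapts the effective stepsize per coordinate.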

Private Stochastic Convex Optimization: Optimal Rates in $\ell_1$ Geometry

no code implementations · 2 Mar 2021 · Hilal Asi, Vitaly Feldman, Tomer Koren, Kunal Talwar

Stochastic convex optimization over an $\ell_1$-bounded domain is ubiquitous in machine learning applications such as LASSO but remains poorly understood when learning with differential privacy.

Lossless Compression of Efficient Private Local Randomizers

no code implementations · 24 Feb 2021 · Vitaly Feldman, Kunal Talwar

Here we demonstrate a general approach that, under standard cryptographic assumptions, compresses every efficient LDP algorithm with negligible loss in privacy and utility guarantees.

Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy Amplification by Shuffling

1 code implementation · 23 Dec 2020 · Vitaly Feldman, Audra McMillan, Kunal Talwar

As a direct corollary of our analysis we derive a simple and nearly optimal algorithm for frequency estimation in the shuffle model of privacy.
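
A toy version of frequency estimation in the shuffle model: each user applies k-ary randomized response, the shuffler strips identities, and the analyzer debiases the anonymous counts. This is a generic sketch under assumed parameters, not the paper's nearly optimal algorithm:

```python
import numpy as np

def krr_report(item, k, p, rng):
    """k-ary randomized response: send the true item with probability p,
    otherwise a uniformly random item (the local randomizer)."""
    return item if rng.random() < p else int(rng.integers(k))

def estimate_frequencies(reports, k, p):
    """Debias the shuffled (anonymous) counts:
    E[count_j] / n = p * f_j + (1 - p) / k, solved for f_j."""
    counts = np.bincount(reports, minlength=k).astype(float)
    n = len(reports)
    return (counts / n - (1.0 - p) / k) / p
```

The shuffler never modifies the messages; anonymity alone is what amplifies each user's local privacy guarantee.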

When is Memorization of Irrelevant Training Data Necessary for High-Accuracy Learning?

1 code implementation · 11 Dec 2020 · Gavin Brown, Mark Bun, Vitaly Feldman, Adam Smith, Kunal Talwar

Our problems are simple and fairly natural variants of the next-symbol prediction and the cluster labeling tasks.

On the Error Resistance of Hinge Loss Minimization

no code implementations · 2 Dec 2020 · Kunal Talwar

Commonly used classification algorithms in machine learning, such as support vector machines, minimize a convex surrogate loss on training examples.
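
The surrogate in question is the hinge loss; a minimal illustration (function and variable names are ours):

```python
import numpy as np

def hinge_loss(w, X, y):
    """Average hinge loss max(0, 1 - y * <w, x>): the convex surrogate
    for the 0-1 classification error that SVMs minimize
    (plus regularization)."""
    margins = y * (X @ w)
    return float(np.maximum(0.0, 1.0 - margins).mean())
```

A linearly separable set with margin at least 1 incurs zero loss, while a misclassified point is penalized linearly in its margin violation.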

On the Error Resistance of Hinge-Loss Minimization

no code implementations · NeurIPS 2020 · Kunal Talwar

Commonly used classification algorithms in machine learning, such as support vector machines, minimize a convex surrogate loss on training examples.

Faster Differentially Private Samplers via Rényi Divergence Analysis of Discretized Langevin MCMC

no code implementations · NeurIPS 2020 · Arun Ganesh, Kunal Talwar

Various differentially private algorithms instantiate the exponential mechanism, and require sampling from the distribution $\exp(-f)$ for a suitable function $f$.
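
The discretized Langevin dynamics analyzed here can be sketched as follows; the step size and iteration count are illustrative assumptions, and no privacy calibration is shown:

```python
import numpy as np

def langevin_sample(grad_f, dim, eta=0.01, steps=2000, seed=0):
    """Unadjusted Langevin algorithm: iterate
    x <- x - eta * grad_f(x) + sqrt(2 * eta) * N(0, I).
    For small eta the iterates approximately follow exp(-f)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(dim)
    for _ in range(steps):
        x = x - eta * grad_f(x) + np.sqrt(2.0 * eta) * rng.standard_normal(dim)
    return x
```

With f(x) = ||x||^2 / 2 (so grad_f(x) = x) the target exp(-f) is a standard Gaussian and the chain mixes quickly; the interesting question, which the paper addresses via Rényi divergences, is how fast the discretized chain approaches the target for general f.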

Stochastic Optimization with Laggard Data Pipelines

no code implementations · NeurIPS 2020 · Naman Agarwal, Rohan Anil, Tomer Koren, Kunal Talwar, Cyril Zhang

State-of-the-art optimization is steadily shifting towards massively parallel pipelines with extremely large batch sizes.

Stochastic Optimization

Private Stochastic Convex Optimization: Optimal Rates in Linear Time

no code implementations · 10 May 2020 · Vitaly Feldman, Tomer Koren, Kunal Talwar

We also give a linear-time algorithm achieving the optimal bound on the excess loss for the strongly convex case, as well as a faster algorithm for the non-smooth case.

Characterizing Structural Regularities of Labeled Data in Overparameterized Models

no code implementations · 8 Feb 2020 · Ziheng Jiang, Chiyuan Zhang, Kunal Talwar, Michael C. Mozer

We obtain empirical estimates of this score for individual instances in multiple data sets, and we show that the score identifies out-of-distribution and mislabeled examples at one end of the continuum and strongly regular examples at the other end.

Curriculum Learning · Density Estimation +2

Computational Separations between Sampling and Optimization

no code implementations · NeurIPS 2019 · Kunal Talwar

Two commonly arising computational tasks in Bayesian learning are Optimization (Maximum A Posteriori estimation) and Sampling (from the posterior distribution).

Rényi Differential Privacy of the Sampled Gaussian Mechanism

2 code implementations · 28 Aug 2019 · Ilya Mironov, Kunal Talwar, Li Zhang

The Sampled Gaussian Mechanism (SGM), a composition of subsampling and additive Gaussian noise, has been successfully used in a number of machine learning applications.
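
The mechanism itself is simple to state; here is a sketch with illustrative parameters (the paper's contribution is the Rényi DP analysis of this composition, not the mechanism):

```python
import numpy as np

def sampled_gaussian_mechanism(records, q, sigma, clip, rng):
    """SGM sketch: Poisson-subsample each record with probability q,
    clip each to L2 norm <= clip, sum, and add N(0, (sigma*clip)^2 I)
    noise to the result."""
    total = np.zeros_like(records[0], dtype=float)
    for r in records:
        if rng.random() < q:
            total += r * min(1.0, clip / (np.linalg.norm(r) + 1e-12))
    return total + rng.normal(0.0, sigma * clip, size=total.shape)
```

Subsampling amplifies privacy: each record only influences the output with probability q, which is what the Rényi analysis quantifies.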

Private Stochastic Convex Optimization with Optimal Rates

no code implementations · NeurIPS 2019 · Raef Bassily, Vitaly Feldman, Kunal Talwar, Abhradeep Thakurta

A long line of existing work on private convex optimization focuses on the empirical loss and derives asymptotically tight bounds on the excess empirical loss.

Semi-Cyclic Stochastic Gradient Descent

no code implementations · 23 Apr 2019 · Hubert Eichner, Tomer Koren, H. Brendan McMahan, Nathan Srebro, Kunal Talwar

We consider convex SGD updates with a block-cyclic structure, i.e., where each cycle consists of a small number of blocks, each with many samples from a possibly different, block-specific distribution.

Federated Learning

Better Algorithms for Stochastic Bandits with Adversarial Corruptions

no code implementations · 22 Feb 2019 · Anupam Gupta, Tomer Koren, Kunal Talwar

We study the stochastic multi-armed bandits problem in the presence of adversarial corruption.

Multi-Armed Bandits

Amplification by Shuffling: From Local to Central Differential Privacy via Anonymity

no code implementations · 29 Nov 2018 · Úlfar Erlingsson, Vitaly Feldman, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, Abhradeep Thakurta

We study the collection of such statistics in the local differential privacy (LDP) model, and describe an algorithm whose privacy cost is polylogarithmic in the number of changes to a user's value.
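
The basic LDP building block in this line of work is randomized response; below is a minimal eps-LDP randomizer with its debiasing step (an illustrative primitive, not the paper's polylogarithmic-cost algorithm):

```python
import math
import random

def rr_report(bit, epsilon, rng=random):
    """eps-LDP randomized response on one bit: tell the truth with
    probability e^eps / (e^eps + 1), otherwise flip the bit."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if rng.random() < p else 1 - bit

def rr_debias(mean_report, epsilon):
    """Invert the response bias to estimate the true mean of the bits."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return (mean_report - (1.0 - p)) / (2.0 * p - 1.0)
```

The amplification result says roughly that if such reports are anonymized by shuffling, the central privacy guarantee of the aggregate is much stronger than the per-report local guarantee.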

Private Selection from Private Candidates

no code implementations · 19 Nov 2018 · Jingcheng Liu, Kunal Talwar

In this work, we consider the selection problem under a much weaker stability assumption on the candidates, namely that the score functions are differentially private.

Hyperparameter Optimization

Privacy Amplification by Iteration

no code implementations · 20 Aug 2018 · Vitaly Feldman, Ilya Mironov, Kunal Talwar, Abhradeep Thakurta

In addition, we demonstrate that we can achieve guarantees similar to those obtainable using the privacy-amplification-by-sampling technique in several natural settings where that technique cannot be applied.

Online Linear Quadratic Control

no code implementations · ICML 2018 · Alon Cohen, Avinatan Hassidim, Tomer Koren, Nevena Lazic, Yishay Mansour, Kunal Talwar

We study the problem of controlling linear time-invariant systems with known noisy dynamics and adversarially chosen quadratic losses.

Adversarially Robust Generalization Requires More Data

no code implementations · NeurIPS 2018 · Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, Aleksander Mądry

We postulate that the difficulty of training robust classifiers stems, at least partially, from this inherently larger sample complexity.

General Classification · Image Classification

Online learning over a finite action set with limited switching

no code implementations · 5 Mar 2018 · Jason Altschuler, Kunal Talwar

Using the above result and several reductions, we unify previous work and completely characterize the complexity of this switching-budget setting up to small polylogarithmic factors: for both PFE and MAB, for all switching budgets $S \leq T$, and both in expectation and with high probability.

Multi-Armed Bandits

Learning Representations for Faster Similarity Search

no code implementations · ICLR 2018 · Ludwig Schmidt, Kunal Talwar

Based on our experiments, we propose a number of training modifications that lead to significantly better datasets for nearest neighbor algorithms.

General Classification

Learning Differentially Private Recurrent Language Models

no code implementations · ICLR 2018 · H. Brendan McMahan, Daniel Ramage, Kunal Talwar, Li Zhang

We demonstrate that it is possible to train large recurrent language models with user-level differential privacy guarantees with only a negligible cost in predictive accuracy.

On the Protection of Private Information in Machine Learning Systems: Two Recent Approaches

no code implementations · 26 Aug 2017 · Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Nicolas Papernot, Kunal Talwar, Li Zhang

The recent, remarkable growth of machine learning has led to intense interest in the privacy of the data on which machine learning relies, and to new techniques for preserving privacy.

Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data

7 code implementations · 18 Oct 2016 · Nicolas Papernot, Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, Kunal Talwar

The approach combines, in a black-box fashion, multiple models trained with disjoint datasets, such as records from different subsets of users.

Transfer Learning

Deep Learning with Differential Privacy

17 code implementations · 1 Jul 2016 · Martín Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, Li Zhang

Machine learning techniques based on neural networks are achieving remarkable results in a wide variety of domains.

Sketching and Neural Networks

no code implementations · 19 Apr 2016 · Amit Daniely, Nevena Lazic, Yoram Singer, Kunal Talwar

In stark contrast, our approach of using improper learning, using a larger hypothesis class allows the sketch size to have a logarithmic dependence on the degree.

Nearly Optimal Private LASSO

no code implementations · NeurIPS 2015 · Kunal Talwar, Abhradeep Guha Thakurta, Li Zhang

In addition, we show that this error bound is nearly optimal amongst all differentially private algorithms.

Private Empirical Risk Minimization Beyond the Worst Case: The Effect of the Constraint Set Geometry

1 code implementation · 20 Nov 2014 · Kunal Talwar, Abhradeep Thakurta, Li Zhang

In addition, we show that when the loss function is Lipschitz with respect to the $\ell_1$ norm and $\mathcal{C}$ is $\ell_1$-bounded, a differentially private version of the Frank-Wolfe algorithm gives error bounds of the form $\tilde{O}(n^{-2/3})$.
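
A sketch of a private Frank-Wolfe iteration over the $\ell_1$ ball: each step moves toward the vertex selected by a noisy maximum over gradient coordinates. All scales here are illustrative assumptions, and the actual privacy calibration from the paper is omitted:

```python
import numpy as np

def private_frank_wolfe(grad, dim, radius, steps, noise_scale, rng):
    """Frank-Wolfe over {w : ||w||_1 <= radius}: the linear minimizer
    is a signed vertex +-radius * e_j, chosen here via Laplace noise
    on the coordinate scores (report-noisy-max sketch)."""
    w = np.zeros(dim)
    for t in range(1, steps + 1):
        g = grad(w)
        scores = np.abs(g) + rng.laplace(0.0, noise_scale, size=dim)
        j = int(np.argmax(scores))
        vertex = np.zeros(dim)
        vertex[j] = -radius * np.sign(g[j])
        step = 2.0 / (t + 2.0)                # standard FW schedule
        w = (1.0 - step) * w + step * vertex
    return w
```

The key point exploited by the analysis is that the $\ell_1$ ball has only $2d$ vertices, so each iteration only needs a private selection among $2d$ candidates rather than noise in all $d$ coordinates.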

Analyze Gauss: Optimal Bounds for Privacy-Preserving Principal Component Analysis

1 code implementation · 1 May 2014 · Cynthia Dwork, Kunal Talwar, Abhradeep Thakurta, Li Zhang

We show that the well-known, but misnamed, randomized response algorithm, with properly tuned parameters, provides a nearly optimal additive quality gap compared to the best possible singular subspace of A.
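
The mechanism reduces to perturbing the covariance with a symmetric Gaussian matrix and diagonalizing; a sketch with an arbitrary noise scale (the paper's contribution is showing how small the noise can be while remaining private and nearly optimal):

```python
import numpy as np

def analyze_gauss(A, sigma, k, rng):
    """Perturb the covariance A^T A with a symmetric Gaussian matrix,
    then return the top-k eigenvectors of the noisy covariance."""
    d = A.shape[1]
    E = rng.normal(0.0, sigma, size=(d, d))
    E = (E + E.T) / 2.0          # symmetrize the noise
    _, vecs = np.linalg.eigh(A.T @ A + E)
    return vecs[:, -k:]          # eigenvectors of the k largest eigenvalues
```

When the data has a dominant direction and the noise is small relative to the spectral gap, the noisy top subspace stays close to the true one.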

Differentially Private Combinatorial Optimization

1 code implementation · 26 Mar 2009 · Anupam Gupta, Katrina Ligett, Frank McSherry, Aaron Roth, Kunal Talwar

Is it even possible to design good algorithms for this problem that preserve the privacy of the clients?

Data Structures and Algorithms · Cryptography and Security · Computer Science and Game Theory
