
no code implementations • 25 Jun 2021 • Hilal Asi, John Duchi, Alireza Fallah, Omid Javidbakht, Kunal Talwar

We study adaptive methods for differentially private convex optimization, proposing and analyzing differentially private variants of a Stochastic Gradient Descent (SGD) algorithm with adaptive stepsizes, as well as the AdaGrad algorithm.

no code implementations • 2 Mar 2021 • Hilal Asi, Vitaly Feldman, Tomer Koren, Kunal Talwar

Stochastic convex optimization over an $\ell_1$-bounded domain is ubiquitous in machine learning applications such as LASSO but remains poorly understood when learning with differential privacy.

no code implementations • 24 Feb 2021 • Vitaly Feldman, Kunal Talwar

Here we demonstrate a general approach that, under standard cryptographic assumptions, compresses every efficient LDP algorithm with negligible loss in privacy and utility guarantees.

1 code implementation • 23 Dec 2020 • Vitaly Feldman, Audra McMillan, Kunal Talwar

As a direct corollary of our analysis we derive a simple and nearly optimal algorithm for frequency estimation in the shuffle model of privacy.

1 code implementation • 11 Dec 2020 • Gavin Brown, Mark Bun, Vitaly Feldman, Adam Smith, Kunal Talwar

Our problems are simple and fairly natural variants of the next-symbol prediction and the cluster labeling tasks.

no code implementations • NeurIPS 2020 (also arXiv, 2 Dec 2020) • Kunal Talwar

Commonly used classification algorithms in machine learning, such as support vector machines, minimize a convex surrogate loss on training examples.

no code implementations • NeurIPS 2020 • Arun Ganesh, Kunal Talwar

Various differentially private algorithms instantiate the exponential mechanism, and require sampling from the distribution $\exp(-f)$ for a suitable function $f$.
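For intuition, the finite-range special case of sampling from $\exp(-f)$ is the classical exponential mechanism (the paper itself concerns samplers for continuous distributions; the function name and the $(\varepsilon,\,\text{sensitivity})$ parameterization below are illustrative, not the paper's algorithm):

```python
import math
import random

def sample_exp_mechanism(candidates, f, epsilon, sensitivity=1.0, rng=random):
    """Sample a candidate x with probability proportional to
    exp(-epsilon * f(x) / (2 * sensitivity)): lower score, higher weight.
    This is the classical exponential mechanism over a finite range."""
    scores = [f(x) for x in candidates]
    base = min(scores)  # shift scores for numerical stability
    weights = [math.exp(-epsilon * (s - base) / (2.0 * sensitivity)) for s in scores]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for x, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return x
    return candidates[-1]  # guard against floating-point round-off
```

With a very large epsilon the mechanism concentrates on the minimizer of f; with a small epsilon it approaches uniform sampling.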

no code implementations • NeurIPS 2020 • Naman Agarwal, Rohan Anil, Tomer Koren, Kunal Talwar, Cyril Zhang

State-of-the-art optimization is steadily shifting towards massively parallel pipelines with extremely large batch sizes.

no code implementations • NeurIPS 2020 • Raef Bassily, Vitaly Feldman, Cristóbal Guzmán, Kunal Talwar

Our work is the first to address uniform stability of SGD on {\em nonsmooth} convex losses.

no code implementations • 10 May 2020 • Vitaly Feldman, Tomer Koren, Kunal Talwar

We also give a linear-time algorithm achieving the optimal bound on the excess loss for the strongly convex case, as well as a faster algorithm for the non-smooth case.

no code implementations • 8 Feb 2020 • Ziheng Jiang, Chiyuan Zhang, Kunal Talwar, Michael C. Mozer

We obtain empirical estimates of this score for individual instances in multiple data sets, and we show that the score identifies out-of-distribution and mislabeled examples at one end of the continuum and strongly regular examples at the other end.

no code implementations • NeurIPS 2019 • Kunal Talwar

Two commonly arising computational tasks in Bayesian learning are Optimization (Maximum A Posteriori estimation) and Sampling (from the posterior distribution).

2 code implementations • 28 Aug 2019 • Ilya Mironov, Kunal Talwar, Li Zhang

The Sampled Gaussian Mechanism (SGM), a composition of subsampling and additive Gaussian noise, has been successfully used in a number of machine learning applications.
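As a minimal illustration of that composition (a hypothetical scalar release, not the paper's analysis, which concerns the mechanism's Rényi privacy accounting):

```python
import random

def sampled_gaussian_sum(values, q, clip, sigma, rng=random):
    """One release of a scalar Sampled Gaussian Mechanism: Poisson-
    subsample each record with probability q, clip each contribution
    to [-clip, clip], sum, and add N(0, (sigma * clip)^2) noise."""
    subsample = [v for v in values if rng.random() < q]
    clipped_sum = sum(max(-clip, min(clip, v)) for v in subsample)
    return clipped_sum + rng.gauss(0.0, sigma * clip)
```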

no code implementations • NeurIPS 2019 • Raef Bassily, Vitaly Feldman, Kunal Talwar, Abhradeep Thakurta

A long line of existing work on private convex optimization focuses on the empirical loss and derives asymptotically tight bounds on the excess empirical loss.

no code implementations • 23 Apr 2019 • Hubert Eichner, Tomer Koren, H. Brendan McMahan, Nathan Srebro, Kunal Talwar

We consider convex SGD updates with a block-cyclic structure, i.e., where each cycle consists of a small number of blocks, each with many samples from a possibly different, block-specific, distribution.

no code implementations • 22 Feb 2019 • Anupam Gupta, Tomer Koren, Kunal Talwar

We study the stochastic multi-armed bandits problem in the presence of adversarial corruption.

no code implementations • 29 Nov 2018 • Úlfar Erlingsson, Vitaly Feldman, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, Abhradeep Thakurta

We study the collection of such statistics in the local differential privacy (LDP) model, and describe an algorithm whose privacy cost is polylogarithmic in the number of changes to a user's value.

no code implementations • 19 Nov 2018 • Jingcheng Liu, Kunal Talwar

In this work, we consider the selection problem under a much weaker stability assumption on the candidates, namely that the score functions are differentially private.

no code implementations • 20 Aug 2018 • Vitaly Feldman, Ilya Mironov, Kunal Talwar, Abhradeep Thakurta

In addition, we demonstrate that we can achieve guarantees similar to those obtainable using the privacy-amplification-by-sampling technique in several natural settings where that technique cannot be applied.

no code implementations • ICML 2018 • Alon Cohen, Avinatan Hassidim, Tomer Koren, Nevena Lazic, Yishay Mansour, Kunal Talwar

We study the problem of controlling linear time-invariant systems with known noisy dynamics and adversarially chosen quadratic losses.

no code implementations • NeurIPS 2018 • Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, Aleksander Mądry

We postulate that the difficulty of training robust classifiers stems, at least partially, from this inherently larger sample complexity.

no code implementations • 5 Mar 2018 • Jason Altschuler, Kunal Talwar

Using the above result and several reductions, we unify previous work and completely characterize the complexity of this switching budget setting up to small polylogarithmic factors: for both PFE and MAB, for all switching budgets $S \leq T$, and for guarantees both in expectation and with high probability.

2 code implementations • ICLR 2018 • Nicolas Papernot, Shuang Song, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, Úlfar Erlingsson

Models and examples built with TensorFlow

no code implementations • ICLR 2018 • Ludwig Schmidt, Kunal Talwar

Based on our experiments, we propose a number of training modifications that lead to significantly better datasets for nearest neighbor algorithms.

no code implementations • ICLR 2018 • H. Brendan McMahan, Daniel Ramage, Kunal Talwar, Li Zhang

We demonstrate that it is possible to train large recurrent language models with user-level differential privacy guarantees with only a negligible cost in predictive accuracy.

no code implementations • 26 Aug 2017 • Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Nicolas Papernot, Kunal Talwar, Li Zhang

The recent, remarkable growth of machine learning has led to intense interest in the privacy of the data on which machine learning relies, and to new techniques for preserving privacy.

7 code implementations • 18 Oct 2016 • Nicolas Papernot, Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, Kunal Talwar

The approach combines, in a black-box fashion, multiple models trained with disjoint datasets, such as records from different subsets of users.
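A minimal sketch of this style of black-box aggregation, in the spirit of noisy majority voting over teacher predictions (the function and parameter names are illustrative; this is not the paper's exact aggregator):

```python
import math
import random
from collections import Counter

def pate_aggregate(teacher_votes, num_classes, noise_scale, rng=random):
    """Noisy-max aggregation: tally the label each teacher predicts,
    perturb every count with Laplace noise of scale `noise_scale`,
    and release the label with the largest noisy count."""
    counts = Counter(teacher_votes)
    noisy_counts = []
    for label in range(num_classes):
        u = rng.random() - 0.5  # Laplace noise via inverse-CDF sampling
        noise = -noise_scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
        noisy_counts.append(counts.get(label, 0) + noise)
    return max(range(num_classes), key=lambda lbl: noisy_counts[lbl])
```

Because each teacher is trained on a disjoint data partition, changing one training record changes at most one vote, which bounds the sensitivity of every count.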

17 code implementations • 1 Jul 2016 • Martín Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, Li Zhang

Machine learning techniques based on neural networks are achieving remarkable results in a wide variety of domains.

no code implementations • 19 Apr 2016 • Amit Daniely, Nevena Lazic, Yoram Singer, Kunal Talwar

In stark contrast, our approach of using improper learning over a larger hypothesis class allows the sketch size to have a logarithmic dependence on the degree.

4 code implementations • 14 Mar 2016 • Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, Xiaoqiang Zheng

TensorFlow is an interface for expressing machine learning algorithms, and an implementation for executing such algorithms.

no code implementations • NeurIPS 2015 • Kunal Talwar, Abhradeep Guha Thakurta, Li Zhang

In addition, we show that this error bound is nearly optimal amongst all differentially private algorithms.

1 code implementation • 20 Nov 2014 • Kunal Talwar, Abhradeep Thakurta, Li Zhang

In addition, we show that when the loss function is Lipschitz with respect to the $\ell_1$ norm and $\mathcal{C}$ is $\ell_1$-bounded, a differentially private version of the Frank-Wolfe algorithm gives error bounds of the form $\tilde{O}(n^{-2/3})$.
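One way such a private Frank-Wolfe variant can be sketched, assuming each iteration's linear subproblem over the polytope's vertices is privatized with the exponential mechanism (names and parameterization are hypothetical, not the paper's exact algorithm):

```python
import math
import random

def dp_frank_wolfe(grad, vertices, steps, epsilon_step, sensitivity=1.0, rng=random):
    """Sketch of a private Frank-Wolfe loop: each iteration picks a
    vertex of the feasible polytope via the exponential mechanism on
    the linear scores <grad(x), v>, then moves with step size 2/(t+2)."""
    dim = len(vertices[0])
    x = [0.0] * dim
    for t in range(steps):
        g = grad(x)
        scores = [sum(gi * vi for gi, vi in zip(g, v)) for v in vertices]
        base = min(scores)  # shift for numerical stability
        weights = [math.exp(-epsilon_step * (s - base) / (2.0 * sensitivity))
                   for s in scores]
        r = rng.random() * sum(weights)
        acc, chosen = 0.0, vertices[-1]
        for v, w in zip(vertices, weights):
            acc += w
            if r <= acc:
                chosen = v
                break
        eta = 2.0 / (t + 2)
        x = [(1.0 - eta) * xi + eta * vi for xi, vi in zip(x, chosen)]
    return x
```

The appeal for the $\ell_1$ ball is that it has only $2d$ vertices, so each noisy vertex selection pays for privacy over a small, discrete set rather than over the whole continuous domain.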

1 code implementation • 1 May 2014 • Cynthia Dwork, Kunal Talwar, Abhradeep Thakurta, Li Zhang

We show that the well-known, but misnamed, randomized response algorithm, with properly tuned parameters, provides a nearly optimal additive quality gap compared to the best possible singular subspace of A.
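For intuition, the classical scalar randomized response primitive and its debiased estimator (the matrix-valued variant tuned in the paper is different; this minimal binary form is only for illustration):

```python
import math
import random

def randomized_response(bit, epsilon, rng=random):
    """Classical binary randomized response: report the true bit with
    probability e^eps / (e^eps + 1), else its flip (eps-DP per report)."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if rng.random() < p_truth else 1 - bit

def estimate_mean(reports, epsilon):
    """Debias the empirical mean of randomized-response reports:
    E[report] = (2p - 1) * true_bit + (1 - p), so invert that affine map."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    raw = sum(reports) / len(reports)
    return (raw - (1.0 - p)) / (2.0 * p - 1.0)
```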

1 code implementation • 26 Mar 2009 • Anupam Gupta, Katrina Ligett, Frank McSherry, Aaron Roth, Kunal Talwar

Is it even possible to design good algorithms for this problem that preserve the privacy of the clients?

Data Structures and Algorithms • Cryptography and Security • Computer Science and Game Theory

Papers With Code is a free resource with all data licensed under CC-BY-SA.