no code implementations • 16 Apr 2024 • Hilal Asi, Vitaly Feldman, Jelani Nelson, Huy L. Nguyen, Kunal Talwar, Samson Zhou
We study the problem of private vector mean estimation in the shuffle model of privacy where $n$ users each have a unit vector $v^{(i)} \in\mathbb{R}^d$.
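For reference, the simplest (non-optimized) baseline for this problem has each user perturb their unit vector with Gaussian noise before the server averages; this is a sketch of the problem setup only, not the paper's shuffle-model protocol, and `sigma` is an arbitrary illustrative value rather than one calibrated to a privacy budget:

```python
import numpy as np

def noisy_mean(vectors, sigma, rng):
    """Average user vectors after each one is perturbed with Gaussian noise."""
    noisy = [v + rng.normal(0.0, sigma, size=v.shape) for v in vectors]
    return np.mean(noisy, axis=0)

rng = np.random.default_rng(0)
n, d = 1000, 8
raw = rng.normal(size=(n, d))
# Each user i holds a unit vector v^(i) in R^d
vectors = raw / np.linalg.norm(raw, axis=1, keepdims=True)
est = noisy_mean(vectors, sigma=0.1, rng=rng)
err = float(np.linalg.norm(est - vectors.mean(axis=0)))
```

The per-user noise averages out, so the estimate concentrates around the true mean at rate roughly $\sigma\sqrt{d/n}$; the paper's question is how much better one can do in the shuffle model.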
no code implementations • 29 Sep 2023 • Martin Pelikan, Sheikh Shams Azam, Vitaly Feldman, Jan "Honza" Silovsky, Kunal Talwar, Tatiana Likhomanenko
($4.5$, $10^{-9}$)-$\textbf{DP}$) with a 1.3% (resp.
Automatic Speech Recognition (ASR) +2
no code implementations • 28 Jul 2023 • Rachel Cummings, Vitaly Feldman, Audra McMillan, Kunal Talwar
In this work we propose a simple model of heterogeneous user data that allows users to differ in both the distribution and the quantity of their data, and provide a method for estimating the population-level mean while preserving user-level differential privacy.
no code implementations • 27 Jul 2023 • Kunal Talwar, Shan Wang, Audra McMillan, Vojta Jina, Vitaly Feldman, Bailey Basile, Aine Cahill, Yi Sheng Chan, Mike Chatzidakis, Junye Chen, Oliver Chick, Mona Chitnis, Suman Ganta, Yusuf Goren, Filip Granqvist, Kristine Guo, Frederic Jacobs, Omid Javidbakht, Albert Liu, Richard Low, Dan Mascenik, Steve Myers, David Park, Wonhee Park, Gianni Parsa, Tommy Pauly, Christian Priebe, Rehan Rishi, Guy Rothblum, Michael Scaria, Linmao Song, Congzheng Song, Karl Tarbe, Sebastian Vogt, Luke Winstrom, Shundong Zhou
We revisit the problem of designing scalable protocols for private statistics and private federated learning when each device holds its private data.
no code implementations • 21 Jul 2023 • Karan Chadha, Junye Chen, John Duchi, Vitaly Feldman, Hanieh Hashemi, Omid Javidbakht, Audra McMillan, Kunal Talwar
In this work, we study practical heuristics to improve the performance of prefix-tree based algorithms for differentially private heavy hitter detection.
no code implementations • 27 Feb 2023 • Hilal Asi, Vitaly Feldman, Tomer Koren, Kunal Talwar
We also develop an adaptive algorithm for the small-loss setting with regret $O(L^\star\log d + \varepsilon^{-1} \log^{1.5} d)$ where $L^\star$ is the total loss of the best expert.
no code implementations • 24 Dec 2022 • Jason M. Altschuler, Kunal Talwar
This discretization leads the Langevin Algorithm to have a stationary distribution $\pi_{\eta}$ which differs from the stationary distribution $\pi$ of the Langevin Diffusion, and it is an important challenge to understand whether the well-known properties of $\pi$ extend to $\pi_{\eta}$.
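The discretization in question is the standard update $x_{k+1} = x_k - \eta \nabla f(x_k) + \sqrt{2\eta}\,\xi_k$. A small simulation (an illustrative sketch with a Gaussian target, not code from the paper) makes the gap between $\pi$ and $\pi_\eta$ visible: for $f(x) = x^2/2$ the diffusion's stationary variance is $1$, while the discretized chain settles near $1/(1-\eta/2)$:

```python
import numpy as np

def langevin_step(x, grad_f, eta, rng):
    """One step of the (unadjusted) Langevin Algorithm:
    x' = x - eta * grad f(x) + sqrt(2 * eta) * N(0, I)."""
    return x - eta * grad_f(x) + np.sqrt(2 * eta) * rng.normal(size=x.shape)

# Target pi ∝ exp(-f) with f(x) = x^2 / 2, i.e. a standard Gaussian.
grad_f = lambda x: x
rng = np.random.default_rng(1)
x = np.zeros(1)
samples = []
for _ in range(20000):
    x = langevin_step(x, grad_f, eta=0.1, rng=rng)
    samples.append(x[0])
samples = np.array(samples[5000:])  # discard burn-in
```

With $\eta = 0.1$ the empirical variance comes out near $1/(1 - 0.05) \approx 1.05$ rather than $1$, a concrete instance of $\pi_\eta \neq \pi$.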
no code implementations • 24 Oct 2022 • John Duchi, Vitaly Feldman, Lunjia Hu, Kunal Talwar
Our goal is to recover the linear subspace shared by $\mu_1,\ldots,\mu_n$ using the data points from all users, where every data point from user $i$ is formed by adding an independent mean-zero noise vector to $\mu_i$.
no code implementations • 24 Oct 2022 • Hilal Asi, Vitaly Feldman, Tomer Koren, Kunal Talwar
Our lower bounds also show a separation between pure and approximate differential privacy for adaptive adversaries where the latter is necessary to achieve the non-private $O(\sqrt{T})$ regret.
no code implementations • 16 Oct 2022 • Jason M. Altschuler, Kunal Talwar
In this way, we disentangle the study of the mixing and bias of the Langevin Algorithm.
no code implementations • 9 Aug 2022 • Vitaly Feldman, Audra McMillan, Kunal Talwar
Our second contribution is a new analysis of privacy amplification by shuffling.
1 code implementation • 18 Jul 2022 • Congzheng Song, Filip Granqvist, Kunal Talwar
We believe FLAIR can serve as a challenging benchmark for advancing the state of the art in federated learning.
no code implementations • 27 May 2022 • Jason M. Altschuler, Kunal Talwar
A central issue in machine learning is how to train models on sensitive user data.
no code implementations • 5 May 2022 • Hilal Asi, Vitaly Feldman, Kunal Talwar
We show that PrivUnit (Bhowmick et al. 2018) with optimized parameters achieves the optimal variance among a large family of locally private randomizers.
1 code implementation • 1 Mar 2022 • Vitaly Feldman, Jelani Nelson, Huy Lê Nguyen, Kunal Talwar
In many parameter settings used in practice this is a significant improvement over the $O(n+k^2)$ computation cost that is achieved by the recent PI-RAPPOR algorithm (Feldman and Talwar, 2021).
no code implementations • 22 Feb 2022 • Kunal Talwar
In private federated learning applications, these vectors are held by client devices, leading to a distributed summation problem.
no code implementations • 25 Jun 2021 • Hilal Asi, John Duchi, Alireza Fallah, Omid Javidbakht, Kunal Talwar
We study adaptive methods for differentially private convex optimization, proposing and analyzing differentially private variants of a Stochastic Gradient Descent (SGD) algorithm with adaptive stepsizes, as well as the AdaGrad algorithm.
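For context, the AdaGrad rule referenced here divides the learning rate, per coordinate, by the root of the accumulated squared gradients; the sketch below is the standard non-private update (the paper's contribution is making such adaptive rules differentially private, which is not shown):

```python
import numpy as np

def adagrad_step(x, g, accum, lr=0.1, eps=1e-8):
    """AdaGrad update: per-coordinate stepsize lr / sqrt(sum of squared grads)."""
    accum = accum + g * g
    x = x - lr * g / (np.sqrt(accum) + eps)
    return x, accum

# Minimize f(x) = x^2 starting from x = 1.0
x, accum = np.array([1.0]), np.array([0.0])
for _ in range(500):
    g = 2.0 * x  # gradient of x^2
    x, accum = adagrad_step(x, g, accum)
```

The effective stepsize shrinks automatically as gradient mass accumulates, so no stepsize schedule has to be tuned by hand.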
no code implementations • 2 Mar 2021 • Hilal Asi, Vitaly Feldman, Tomer Koren, Kunal Talwar
Stochastic convex optimization over an $\ell_1$-bounded domain is ubiquitous in machine learning applications such as LASSO but remains poorly understood when learning with differential privacy.
no code implementations • 24 Feb 2021 • Vitaly Feldman, Kunal Talwar
Here we demonstrate a general approach that, under standard cryptographic assumptions, compresses every efficient LDP algorithm with negligible loss in privacy and utility guarantees.
1 code implementation • 23 Dec 2020 • Vitaly Feldman, Audra McMillan, Kunal Talwar
As a direct corollary of our analysis we derive a simple and nearly optimal algorithm for frequency estimation in the shuffle model of privacy.
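The flavor of such a frequency-estimation protocol can be sketched as $k$-ary randomized response whose reports pass through a shuffler before the server debiases the counts; this is an illustrative sketch, not the paper's algorithm or parameter choices:

```python
import math
import random
from collections import Counter

def k_rr(value, k, epsilon, rng):
    """k-ary randomized response: report the true value with probability
    p = e^eps / (e^eps + k - 1), otherwise a uniformly random other value."""
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if rng.random() < p:
        return value
    other = rng.randrange(k - 1)
    return other if other < value else other + 1

def estimate_frequencies(reports, k, epsilon):
    """Debias the shuffled reports into frequency estimates."""
    n = len(reports)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    q = (1.0 - p) / (k - 1)
    counts = Counter(reports)
    return {v: (counts.get(v, 0) / n - q) / (p - q) for v in range(k)}

rng = random.Random(0)
k, eps = 4, 2.0
data = [0] * 6000 + [1] * 3000 + [2] * 1000  # true freqs 0.6, 0.3, 0.1, 0.0
reports = [k_rr(v, k, eps, rng) for v in data]
rng.shuffle(reports)  # the shuffler: server sees reports without identities
freqs = estimate_frequencies(reports, k, eps)
```

The shuffle step is what separates this from the plain local model: the server only ever sees the anonymized multiset of reports, which is what the amplification analysis exploits.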
1 code implementation • 11 Dec 2020 • Gavin Brown, Mark Bun, Vitaly Feldman, Adam Smith, Kunal Talwar
Our problems are simple and fairly natural variants of the next-symbol prediction and the cluster labeling tasks.
no code implementations • 2 Dec 2020 • Kunal Talwar
Commonly used classification algorithms in machine learning, such as support vector machines, minimize a convex surrogate loss on training examples.
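The SVM surrogate in question is the hinge loss, a convex upper bound on the 0-1 loss; a textbook illustration, not code from the paper:

```python
def zero_one_loss(margin):
    """1 if the example is misclassified (margin <= 0), else 0."""
    return 1.0 if margin <= 0 else 0.0

def hinge_loss(margin):
    """Convex surrogate: penalizes margins below 1, upper-bounds the 0-1 loss."""
    return max(0.0, 1.0 - margin)

# The surrogate dominates the 0-1 loss at every margin value.
margins = [-2.0, -0.5, 0.0, 0.5, 2.0]
pairs = [(zero_one_loss(m), hinge_loss(m)) for m in margins]
```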
no code implementations • NeurIPS 2020 • Kunal Talwar
Commonly used classification algorithms in machine learning, such as support vector machines, minimize a convex surrogate loss on training examples.
no code implementations • NeurIPS 2020 • Arun Ganesh, Kunal Talwar
Various differentially private algorithms instantiate the exponential mechanism, and require sampling from the distribution $\exp(-f)$ for a suitable function $f$.
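Over a finite candidate set the exponential mechanism reduces to weighted sampling with weights $\exp(\varepsilon\, q(i) / 2\Delta)$, i.e. sampling from $\exp(-f)$ with $f(i) = -\varepsilon\, q(i)/2\Delta$; a minimal discrete sketch (the paper's subject, sampling in the continuous case, is much harder):

```python
import math
import random

def exponential_mechanism(scores, epsilon, sensitivity, rng):
    """Sample index i with probability proportional to
    exp(epsilon * scores[i] / (2 * sensitivity))."""
    weights = [math.exp(epsilon * s / (2 * sensitivity)) for s in scores]
    r = rng.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(scores) - 1  # guard against floating-point underflow

rng = random.Random(0)
scores = [0.0, 1.0, 5.0]
picks = [exponential_mechanism(scores, epsilon=2.0, sensitivity=1.0, rng=rng)
         for _ in range(2000)]
```

With these scores the highest-utility candidate is selected the overwhelming majority of the time, while lower-scoring candidates retain nonzero probability, which is the source of the privacy guarantee.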
no code implementations • NeurIPS 2020 • Naman Agarwal, Rohan Anil, Tomer Koren, Kunal Talwar, Cyril Zhang
State-of-the-art optimization is steadily shifting towards massively parallel pipelines with extremely large batch sizes.
no code implementations • NeurIPS 2020 • Raef Bassily, Vitaly Feldman, Cristóbal Guzmán, Kunal Talwar
Our work is the first to address uniform stability of SGD on {\em nonsmooth} convex losses.
no code implementations • 10 May 2020 • Vitaly Feldman, Tomer Koren, Kunal Talwar
We also give a linear-time algorithm achieving the optimal bound on the excess loss for the strongly convex case, as well as a faster algorithm for the non-smooth case.
1 code implementation • 8 Feb 2020 • Ziheng Jiang, Chiyuan Zhang, Kunal Talwar, Michael C. Mozer
We obtain empirical estimates of this score for individual instances in multiple data sets, and we show that the score identifies out-of-distribution and mislabeled examples at one end of the continuum and strongly regular examples at the other end.
no code implementations • NeurIPS 2019 • Kunal Talwar
Two commonly arising computational tasks in Bayesian learning are Optimization (Maximum A Posteriori estimation) and Sampling (from the posterior distribution).
2 code implementations • 28 Aug 2019 • Ilya Mironov, Kunal Talwar, Li Zhang
The Sampled Gaussian Mechanism (SGM), a composition of subsampling and additive Gaussian noise, has been successfully used in a number of machine learning applications.
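The two components the abstract names compose as follows: Poisson-subsample the inputs with some probability $q$, then add Gaussian noise to the subsampled sum. A minimal sketch of the mechanism itself (the paper's contribution, the Rényi DP analysis of this composition, is not shown):

```python
import numpy as np

def sampled_gaussian_sum(values, q, sigma, rng):
    """Poisson-subsample each value with prob q, sum, add N(0, sigma^2) noise."""
    mask = rng.random(len(values)) < q
    return float(np.sum(np.asarray(values)[mask]) + rng.normal(0.0, sigma))

rng = np.random.default_rng(7)
values = np.ones(1000)  # each user contributes a value bounded by 1
out = sampled_gaussian_sum(values, q=0.5, sigma=4.0, rng=rng)
```

The subsampling randomness amplifies the privacy of the Gaussian noise, which is why the composed mechanism gets a better guarantee than either piece alone.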
no code implementations • NeurIPS 2019 • Raef Bassily, Vitaly Feldman, Kunal Talwar, Abhradeep Thakurta
A long line of existing work on private convex optimization focuses on the empirical loss and derives asymptotically tight bounds on the excess empirical loss.
no code implementations • 23 Apr 2019 • Hubert Eichner, Tomer Koren, H. Brendan McMahan, Nathan Srebro, Kunal Talwar
We consider convex SGD updates with a block-cyclic structure, i.e., where each cycle consists of a small number of blocks, each with many samples from a possibly different, block-specific distribution.
no code implementations • 22 Feb 2019 • Anupam Gupta, Tomer Koren, Kunal Talwar
We study the stochastic multi-armed bandits problem in the presence of adversarial corruption.
no code implementations • 29 Nov 2018 • Úlfar Erlingsson, Vitaly Feldman, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, Abhradeep Thakurta
We study the collection of such statistics in the local differential privacy (LDP) model, and describe an algorithm whose privacy cost is polylogarithmic in the number of changes to a user's value.
no code implementations • 19 Nov 2018 • Jingcheng Liu, Kunal Talwar
In this work, we consider the selection problem under a much weaker stability assumption on the candidates, namely that the score functions are differentially private.
no code implementations • 20 Aug 2018 • Vitaly Feldman, Ilya Mironov, Kunal Talwar, Abhradeep Thakurta
In addition, we demonstrate that we can achieve guarantees similar to those obtainable using the privacy-amplification-by-sampling technique in several natural settings where that technique cannot be applied.
no code implementations • ICML 2018 • Alon Cohen, Avinatan Hassidim, Tomer Koren, Nevena Lazic, Yishay Mansour, Kunal Talwar
We study the problem of controlling linear time-invariant systems with known noisy dynamics and adversarially chosen quadratic losses.
no code implementations • NeurIPS 2018 • Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, Aleksander Mądry
We postulate that the difficulty of training robust classifiers stems, at least partially, from this inherently larger sample complexity.
no code implementations • 5 Mar 2018 • Jason Altschuler, Kunal Talwar
Using the above result and several reductions, we unify previous work and completely characterize the complexity of this switching budget setting up to small polylogarithmic factors: for both PFE and MAB, for all switching budgets $S \leq T$, and for both expectation and h.p.
3 code implementations • ICLR 2018 • Nicolas Papernot, Shuang Song, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, Úlfar Erlingsson
Models and examples built with TensorFlow
no code implementations • ICLR 2018 • Ludwig Schmidt, Kunal Talwar
Based on our experiments, we propose a number of training modifications that lead to significantly better datasets for nearest neighbor algorithms.
1 code implementation • ICLR 2018 • H. Brendan McMahan, Daniel Ramage, Kunal Talwar, Li Zhang
We demonstrate that it is possible to train large recurrent language models with user-level differential privacy guarantees with only a negligible cost in predictive accuracy.
no code implementations • 26 Aug 2017 • Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Nicolas Papernot, Kunal Talwar, Li Zhang
The recent, remarkable growth of machine learning has led to intense interest in the privacy of the data on which machine learning relies, and to new techniques for preserving privacy.
8 code implementations • 18 Oct 2016 • Nicolas Papernot, Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, Kunal Talwar
The approach combines, in a black-box fashion, multiple models trained with disjoint datasets, such as records from different subsets of users.
25 code implementations • 1 Jul 2016 • Martín Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, Li Zhang
Machine learning techniques based on neural networks are achieving remarkable results in a wide variety of domains.
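The core primitive this paper is known for is per-example gradient clipping followed by Gaussian noise addition; the sketch below shows that one step in NumPy (an illustrative reconstruction, not the paper's TensorFlow code, and without the moments-accountant analysis that tracks the privacy cost):

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_mult, rng):
    """Clip each per-example gradient to L2 norm clip_norm, sum,
    add Gaussian noise scaled by noise_mult * clip_norm, and average."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g / max(1.0, norm / clip_norm))
    noise = rng.normal(0.0, noise_mult * clip_norm, size=clipped[0].shape)
    return (np.sum(clipped, axis=0) + noise) / len(clipped)

rng = np.random.default_rng(0)
grads = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]  # norms 5.0 and 0.5
g = dp_sgd_step(grads, clip_norm=1.0, noise_mult=0.0, rng=rng)
```

Clipping bounds each example's influence on the sum, which is what lets the added Gaussian noise translate into a differential privacy guarantee.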
no code implementations • 19 Apr 2016 • Amit Daniely, Nevena Lazic, Yoram Singer, Kunal Talwar
In stark contrast, our approach of improper learning with a larger hypothesis class allows the sketch size to have a logarithmic dependence on the degree.
4 code implementations • 14 Mar 2016 • Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, Xiaoqiang Zheng
TensorFlow is an interface for expressing machine learning algorithms, and an implementation for executing such algorithms.
no code implementations • NeurIPS 2015 • Kunal Talwar, Abhradeep Guha Thakurta, Li Zhang
In addition, we show that this error bound is nearly optimal amongst all differentially private algorithms.
1 code implementation • 20 Nov 2014 • Kunal Talwar, Abhradeep Thakurta, Li Zhang
In addition, we show that when the loss function is Lipschitz with respect to the $\ell_1$ norm and $\mathcal{C}$ is $\ell_1$-bounded, a differentially private version of the Frank-Wolfe algorithm gives error bounds of the form $\tilde{O}(n^{-2/3})$.
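Frank-Wolfe suits the $\ell_1$-bounded setting because each step only consults the gradient at one vertex of the $\ell_1$ ball; the sketch below is the standard non-private method (the paper's private variant selects this vertex noisily, which is not shown):

```python
import numpy as np

def frank_wolfe_l1(grad_f, x0, radius, steps):
    """Frank-Wolfe over the l1 ball: each step moves toward the vertex
    ±radius * e_i that minimizes the linear approximation of f."""
    x = x0.astype(float)
    for t in range(1, steps + 1):
        g = grad_f(x)
        i = int(np.argmax(np.abs(g)))
        s = np.zeros_like(x)
        s[i] = -radius * np.sign(g[i])
        gamma = 2.0 / (t + 2)        # standard step-size schedule
        x = (1 - gamma) * x + gamma * s
    return x

# Minimize f(x) = |x - c|^2 / 2 over the l1 ball of radius 1
c = np.array([1.0, 0.2])
x = frank_wolfe_l1(lambda x: x - c, np.zeros(2), radius=1.0, steps=200)
```

Because every iterate is a convex combination of $\ell_1$-ball vertices, the method needs no projection step, and the per-step vertex choice is a low-sensitivity selection problem, which is what makes it amenable to privatization.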
1 code implementation • 1 May 2014 • Cynthia Dwork, Kunal Talwar, Abhradeep Thakurta, Li Zhang
We show that the well-known, but misnamed, randomized response algorithm, with properly tuned parameters, provides a nearly optimal additive quality gap compared to the best possible singular subspace of A.
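The "randomized response" in this setting refers to perturbing the entries of the input matrix $A$ itself before taking an SVD; a minimal sketch of that entry-wise perturbation (illustrative only; calibrating `sigma` to a DP guarantee and the optimality analysis are the paper's content):

```python
import numpy as np

def noisy_top_singular_vector(A, sigma, rng):
    """Add i.i.d. Gaussian noise to each entry of A, then return the top
    right-singular vector of the noisy matrix."""
    noisy = A + rng.normal(0.0, sigma, size=A.shape)
    _, _, vt = np.linalg.svd(noisy)
    return vt[0]

rng = np.random.default_rng(3)
# Rank-1 signal: every row of A is aligned with e_0
A = np.outer(np.ones(200), np.array([1.0, 0.0, 0.0]))
v = noisy_top_singular_vector(A, sigma=0.1, rng=rng)
```

When the signal's top singular value dominates the spectral norm of the noise, the recovered subspace stays close to the true one, which is the quality gap the abstract refers to.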
no code implementations • 26 Mar 2009 • Anupam Gupta, Katrina Ligett, Frank McSherry, Aaron Roth, Kunal Talwar
Is it even possible to design good algorithms for this problem that preserve the privacy of the clients?
Data Structures and Algorithms • Cryptography and Security • Computer Science and Game Theory