no code implementations • 27 Jun 2024 • Vitaly Feldman, Audra McMillan, Satchit Sivakumar, Kunal Talwar
For distributions $P$ over $\mathbb{R}$, we consider a strong notion of instance-optimality: an algorithm that uniformly achieves the instance-optimal estimation rate is competitive with an algorithm that is told that the distribution is either $P$ or $Q_P$ for some distribution $Q_P$ whose probability density function (pdf) is within a factor of 2 of the pdf of $P$.
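As a schematic restatement of the factor-of-2 condition above (our notation; the paper's formal benchmark definition governs):

```latex
% Factor-of-2 pointwise closeness of the pdfs of Q_P and P (our notation).
\[
  \tfrac{1}{2}\, p(x) \;\le\; q_P(x) \;\le\; 2\, p(x)
  \qquad \text{for all } x \in \mathbb{R},
\]
% and the instance-optimal algorithm is asked to match, at every P, the best
% algorithm that knows the input distribution lies in the pair {P, Q_P}.
```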
no code implementations • 28 Jul 2023 • Rachel Cummings, Vitaly Feldman, Audra McMillan, Kunal Talwar
In this work we propose a simple model of heterogeneous user data in which users may differ in both the distribution and the quantity of their data, and we provide a method for estimating the population-level mean while preserving user-level differential privacy.
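For orientation, the sketch below shows the standard clip-and-noise baseline for user-level DP mean estimation that such work builds on; the function name and parameters are ours, and this is not the heterogeneity-aware estimator proposed in the paper.

```python
import numpy as np

def userlevel_dp_mean(user_datasets, clip=1.0, epsilon=1.0, rng=None):
    """Generic clip-and-noise baseline for user-level DP mean estimation
    (our illustration, not the paper's estimator).  Each user contributes a
    single clipped summary, so replacing one user's entire dataset changes
    the average of the n summaries by at most 2*clip/n.
    """
    rng = rng or np.random.default_rng()
    per_user = np.array([np.clip(np.mean(x), -clip, clip) for x in user_datasets])
    n = len(per_user)
    sensitivity = 2.0 * clip / n
    return per_user.mean() + rng.laplace(scale=sensitivity / epsilon)
```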
no code implementations • 27 Jul 2023 • Kunal Talwar, Shan Wang, Audra McMillan, Vojta Jina, Vitaly Feldman, Pansy Bansal, Bailey Basile, Aine Cahill, Yi Sheng Chan, Mike Chatzidakis, Junye Chen, Oliver Chick, Mona Chitnis, Suman Ganta, Yusuf Goren, Filip Granqvist, Kristine Guo, Frederic Jacobs, Omid Javidbakht, Albert Liu, Richard Low, Dan Mascenik, Steve Myers, David Park, Wonhee Park, Gianni Parsa, Tommy Pauly, Christian Priebe, Rehan Rishi, Guy Rothblum, Michael Scaria, Linmao Song, Congzheng Song, Karl Tarbe, Sebastian Vogt, Luke Winstrom, Shundong Zhou
This gap has led to significant interest in the design and implementation of simple cryptographic primitives that can allow central-like utility guarantees without having to trust a central server.
no code implementations • 21 Jul 2023 • Karan Chadha, Junye Chen, John Duchi, Vitaly Feldman, Hanieh Hashemi, Omid Javidbakht, Audra McMillan, Kunal Talwar
In this work, we study practical heuristics to improve the performance of prefix-tree based algorithms for differentially private heavy hitter detection.
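The sketch below illustrates the basic structure that prefix-tree heavy-hitter algorithms share: extend only those prefixes whose noisy counts clear a threshold. It is our generic illustration (names, budget split, and threshold are assumptions), not the specific heuristics studied in the paper.

```python
import numpy as np

def dp_heavy_hitters(strings, alphabet, max_len, epsilon, threshold, rng=None):
    """Generic prefix-tree heavy-hitter sketch: at each level, count the
    strings matching each surviving prefix, add Laplace noise, and extend
    only prefixes whose noisy count exceeds the threshold.
    """
    rng = rng or np.random.default_rng()
    eps_level = epsilon / max_len  # naive budget split across tree levels
    survivors = [""]
    for depth in range(1, max_len + 1):
        candidates = [p + c for p in survivors for c in alphabet]
        survivors = []
        for prefix in candidates:
            # Each user holds one string, which matches at most one prefix
            # per level, so the per-level sensitivity is 1 (add/remove-one).
            count = sum(s.startswith(prefix) for s in strings)
            noisy = count + rng.laplace(scale=1.0 / eps_level)
            if noisy >= threshold:
                survivors.append(prefix)
    return survivors
```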
no code implementations • 28 Oct 2022 • Audra McMillan, Adam Smith, Jon Ullman
In this work, we study local minimax convergence estimation rates subject to $\epsilon$-differential privacy.
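One common way to formalize a local minimax risk of this kind is as a hardest two-point subproblem over private estimators; the schematic below uses our notation and omits details, and the paper's precise definition governs.

```latex
% Schematic epsilon-DP local minimax (two-point) risk at P (our notation).
\[
  \mathcal{R}_n(P) \;=\; \sup_{Q}\ \inf_{M \in \mathcal{M}_\varepsilon}\
  \max_{D \in \{P,\, Q\}}\ \mathbb{E}_{X \sim D^{\otimes n}}
  \bigl[\, \ell\bigl(M(X),\, \theta(D)\bigr) \,\bigr],
\]
% where M ranges over epsilon-differentially private estimators and
% theta(D) denotes the estimand.
```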
no code implementations • 9 Aug 2022 • Vitaly Feldman, Audra McMillan, Kunal Talwar
Our second contribution is a new analysis of privacy amplification by shuffling.
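For context, amplification-by-shuffling bounds in this line of work typically have the following shape (stated informally; the exact constants and validity conditions are those in the paper): if each of $n$ users applies an $\varepsilon_0$-DP local randomizer and the reports are uniformly shuffled, the shuffled output satisfies $(\varepsilon, \delta)$-differential privacy with

```latex
% Informal shape of amplification-by-shuffling bounds (constants and
% conditions omitted; see the paper for the precise statement).
\[
  \varepsilon \;=\; O\!\left( \bigl(1 - e^{-\varepsilon_0}\bigr)
      \sqrt{\frac{e^{\varepsilon_0} \log(1/\delta)}{n}} \right),
  \qquad \text{for } \varepsilon_0 \text{ not too large relative to } \log n .
\]
```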
1 code implementation • 18 Jun 2021 • Joerg Drechsler, Ira Globus-Harris, Audra McMillan, Jayshree Sarathy, Adam Smith
Differential privacy is a restriction on data processing algorithms that provides strong confidentiality guarantees for individual records in the data.
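For reference, the standard $(\varepsilon, \delta)$-differential privacy guarantee requires, for a randomized algorithm $M$:

```latex
% Standard definition of (epsilon, delta)-differential privacy.
\[
  \Pr[M(D) \in S] \;\le\; e^{\varepsilon}\, \Pr[M(D') \in S] + \delta
  \quad \text{for all measurable } S \text{ and all datasets } D, D'
  \text{ differing in one record.}
\]
```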
1 code implementation • 23 Dec 2020 • Vitaly Feldman, Audra McMillan, Kunal Talwar
As a direct corollary of our analysis we derive a simple and nearly optimal algorithm for frequency estimation in the shuffle model of privacy.
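To illustrate the shuffle-model pipeline (randomize locally, shuffle, aggregate and debias), here is a minimal frequency-estimation sketch based on k-ary randomized response; it is our illustration of the general recipe, not the paper's algorithm, and the function name and parameters are assumptions.

```python
import numpy as np

def shuffle_frequency_estimate(data, k, epsilon_0, rng=None):
    """Frequency estimation via k-ary randomized response plus debiasing.

    data: array of ints in [0, k); epsilon_0: local privacy parameter.
    """
    rng = rng or np.random.default_rng()
    data = np.asarray(data)
    n = len(data)
    # Keep the true value w.p. gamma, otherwise report a uniform symbol;
    # this choice of gamma makes the local randomizer epsilon_0-DP.
    gamma = (np.exp(epsilon_0) - 1.0) / (np.exp(epsilon_0) - 1.0 + k)
    reports = np.where(rng.random(n) < gamma, data,
                       rng.integers(0, k, size=n))
    # (A shuffler would now discard the order of `reports`; the amplified
    #  central guarantee comes from analyzing the shuffled multiset.)
    counts = np.bincount(reports, minlength=k) / n
    # Unbias: E[counts[v]] = gamma * f_v + (1 - gamma) / k.
    return (counts - (1.0 - gamma) / k) / gamma
```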
no code implementations • 24 Jul 2020 • Mark Bun, Jörg Drechsler, Marco Gaboardi, Audra McMillan, Jayshree Sarathy
Sampling schemes are fundamental tools in statistics, survey design, and algorithm design.
no code implementations • 10 Jul 2020 • Daniel Alabi, Audra McMillan, Jayshree Sarathy, Adam Smith, Salil Vadhan
Economics and social science research often require analyzing datasets of sensitive personal information at fine granularity, with models fit to small subsets of the data.
no code implementations • NeurIPS 2020 • Clément L. Canonne, Gautam Kamath, Audra McMillan, Jonathan Ullman, Lydia Zakynthinou
In this work we present novel differentially private identity (goodness-of-fit) testers for natural and widely studied classes of multivariate product distributions: Gaussians in $\mathbb{R}^d$ with known covariance and product distributions over $\{\pm 1\}^{d}$.
no code implementations • 27 Nov 2018 • Clément L. Canonne, Gautam Kamath, Audra McMillan, Adam Smith, Jonathan Ullman
Specifically, we characterize this sample complexity up to constant factors in terms of the structure of $P$ and $Q$ and the privacy level $\varepsilon$, and show that this sample complexity is achieved by a certain randomized and clamped variant of the log-likelihood ratio test.
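The sketch below shows the general shape of a noisy clamped log-likelihood ratio test: clamping each per-sample log-likelihood ratio bounds the sensitivity of the sum, so Laplace noise at that scale suffices. The clamping range, randomization, and threshold here are placeholders of ours; the paper's specific variant differs.

```python
import numpy as np

def private_clamped_llr_test(samples, log_p, log_q, clamp, epsilon,
                             threshold, rng=None):
    """Noisy clamped log-likelihood ratio test sketch: clamp each per-sample
    log(p/q) to [-clamp, clamp] so one sample changes the statistic by at
    most 2*clamp, add Laplace noise at that scale, and threshold.
    """
    rng = rng or np.random.default_rng()
    llrs = np.clip([log_p(x) - log_q(x) for x in samples], -clamp, clamp)
    stat = np.sum(llrs) + rng.laplace(scale=2.0 * clamp / epsilon)
    return "P" if stat >= threshold else "Q"
```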
no code implementations • NeurIPS 2019 • Jacob Abernethy, Young Hun Jung, Chansoo Lee, Audra McMillan, Ambuj Tewari
In this paper, we use differential privacy as a lens to examine online learning in both full and partial information settings.
no code implementations • 7 Apr 2016 • Audra McMillan, Adam Smith
We provide a lower bound on the accuracy of estimators for block graphons with a large number of blocks.