no code implementations • 11 Feb 2024 • Karan Chadha, John Duchi, Rohith Kuditipudi
We consider the task of constructing confidence intervals with differential privacy.
no code implementations • 21 Jul 2023 • Karan Chadha, Junye Chen, John Duchi, Vitaly Feldman, Hanieh Hashemi, Omid Javidbakht, Audra McMillan, Kunal Talwar
In this work, we study practical heuristics to improve the performance of prefix-tree-based algorithms for differentially private heavy hitter detection.
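For intuition, here is a minimal sketch of the prefix-tree idea this abstract refers to, not the authors' exact algorithm or heuristics: candidate prefixes are extended one character per level, and only prefixes whose noisy counts clear a threshold survive to the next level. All names, the per-level budget split, and the threshold rule are illustrative assumptions.

```python
import random
from collections import Counter

def dp_prefix_tree_heavy_hitters(words, epsilon, depth, threshold,
                                 alphabet="abcdefghijklmnopqrstuvwxyz"):
    """Illustrative prefix-tree heavy-hitter sketch with Laplace noise.

    The privacy budget is split evenly across levels; at each level only
    prefixes whose noisy count exceeds `threshold` are extended further.
    """
    eps_per_level = epsilon / depth
    survivors = [""]  # prefixes kept so far
    for level in range(1, depth + 1):
        candidates = [p + c for p in survivors for c in alphabet]
        counts = Counter(w[:level] for w in words if len(w) >= level)
        survivors = []
        for prefix in candidates:
            # difference of two Exp(eps_per_level) draws is Laplace(scale=1/eps_per_level) noise
            noise = random.expovariate(eps_per_level) - random.expovariate(eps_per_level)
            if counts[prefix] + noise >= threshold:
                survivors.append(prefix)
    return survivors

# toy usage
data = ["apple"] * 50 + ["apply"] * 30 + ["banana"] * 5
print(dp_prefix_tree_heavy_hitters(data, epsilon=4.0, depth=5, threshold=10))
```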
no code implementations • 31 Oct 2022 • Hilal Asi, Karan Chadha, Gary Cheng, John Duchi
In non-private stochastic convex optimization, stochastic gradient methods converge much faster on interpolation problems (problems where a single solution simultaneously minimizes all of the sample losses) than on non-interpolating ones; we show that, in general, such improvements are impossible in the private setting.
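For reference, the interpolation condition mentioned above can be stated as follows (a standard definition written in my own notation, not quoted from the paper):

```latex
% Interpolation: one point simultaneously minimizes every sample loss.
\exists\, x^\star \ \text{such that}\ f_i(x^\star) = \inf_x f_i(x)
  \quad \text{for all } i = 1, \dots, n.
% For differentiable losses this means \nabla f_i(x^\star) = 0 for every sample i,
% which is what allows non-private SGD to converge at the faster "interpolation" rate.
```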
no code implementations • 16 Aug 2021 • Gary Cheng, Karan Chadha, John Duchi
We propose an asymptotic framework to analyze the performance of (personalized) federated learning algorithms.
no code implementations • 7 Jan 2021 • Karan Chadha, Gary Cheng, John C. Duchi
We extend the Approximate-Proximal Point (aProx) family of model-based methods for stochastic convex optimization, which includes stochastic subgradient, proximal point, and bundle methods, to the minibatch and accelerated settings.
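As a hedged illustration of what a minibatch model-based (aProx-style) step looks like, the sketch below uses the truncated linear model built from minibatch-averaged losses and gradients; the function names, the specific model choice, and the toy problem are my assumptions, not the paper's exact method.

```python
import numpy as np

def aprox_truncated_minibatch_step(x, losses, grads, stepsize, lower_bound=0.0):
    """One minibatch step with a truncated linear model (illustrative sketch).

    Unlike the plain SGD step x - stepsize * g, the truncated model never
    lets the update overshoot below a known lower bound on the loss
    (0 for nonnegative losses), making large stepsizes more robust.
    """
    f_bar = float(np.mean(losses))      # average loss over the minibatch
    g_bar = np.mean(grads, axis=0)      # average (sub)gradient
    g_norm_sq = float(g_bar @ g_bar)
    if g_norm_sq == 0.0:
        return x
    # exact prox step on the model max(f_bar + <g_bar, y - x>, lower_bound)
    step = min(stepsize, (f_bar - lower_bound) / g_norm_sq)
    return x - step * g_bar

# toy usage: minibatch least squares, f_i(x) = 0.5 * (a_i @ x - b_i)^2
rng = np.random.default_rng(0)
A, b = rng.normal(size=(256, 10)), rng.normal(size=256)
x = np.zeros(10)
for _ in range(200):
    idx = rng.choice(256, size=16, replace=False)
    res = A[idx] @ x - b[idx]
    x = aprox_truncated_minibatch_step(x, 0.5 * res**2, res[:, None] * A[idx], stepsize=1.0)
```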
no code implementations • NeurIPS 2020 • Hilal Asi, Karan Chadha, Gary Cheng, John C. Duchi
In contrast to standard stochastic gradient methods, these model-based methods can achieve linear speedup in the minibatch setting even for non-smooth functions.
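For context, the generic minibatch model-based update behind this speedup statement can be written as follows (standard notation, a sketch rather than a quotation from the paper):

```latex
% Given a minibatch B_k of size b and per-sample models f_i^{x_k}(\cdot)
% built at the current iterate x_k,
x_{k+1} \;=\; \mathop{\mathrm{arg\,min}}_{x}
  \Bigl\{ \frac{1}{b} \sum_{i \in B_k} f_i^{x_k}(x)
  \;+\; \frac{1}{2\alpha_k}\, \lVert x - x_k \rVert_2^2 \Bigr\}.
% Taking f_i^{x_k} = f_i recovers the minibatch proximal point method,
% while a linear model f_i^{x_k}(x) = f_i(x_k) + \langle g_i, x - x_k \rangle
% recovers minibatch SGD.
```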