Search Results for author: Karan Chadha

Found 6 papers, 0 papers with code

Resampling methods for Private Statistical Inference

no code implementations • 11 Feb 2024 • Karan Chadha, John Duchi, Rohit Kuditipudi

We consider the task of constructing confidence intervals with differential privacy.
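
As a rough illustration of the problem setting only (not the paper's resampling approach), the sketch below builds a confidence interval for a bounded mean by privatizing the sample mean with the Gaussian mechanism and widening a normal-approximation interval to account for the added noise. The helper `private_mean_ci`, the clipping range, the privacy parameters, and the non-private plug-in variance are all illustrative assumptions.

```python
import numpy as np

def private_mean_ci(x, lo, hi, eps, delta):
    """Illustrative (eps, delta)-DP 95% confidence interval for a bounded mean.

    Not the paper's resampling method: it simply adds Gaussian-mechanism noise
    to the clipped sample mean and inflates the interval width accordingly.
    """
    x = np.clip(np.asarray(x, dtype=float), lo, hi)
    n = len(x)
    sensitivity = (hi - lo) / n                      # sensitivity of the clipped mean
    sigma_priv = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / eps
    noisy_mean = x.mean() + np.random.default_rng().normal(0.0, sigma_priv)

    # Combine the (non-private, plug-in) sampling variance with the privacy noise.
    se = np.sqrt(x.var(ddof=1) / n + sigma_priv ** 2)
    z = 1.959963984540054                            # ~97.5% standard normal quantile
    return noisy_mean - z * se, noisy_mean + z * se

data = np.random.default_rng(0).uniform(0, 1, size=2000)
print(private_mean_ci(data, lo=0.0, hi=1.0, eps=1.0, delta=1e-6))
```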

Differentially Private Heavy Hitter Detection using Federated Analytics

no code implementations • 21 Jul 2023 • Karan Chadha, Junye Chen, John Duchi, Vitaly Feldman, Hanieh Hashemi, Omid Javidbakht, Audra McMillan, Kunal Talwar

In this work, we study practical heuristics to improve the performance of prefix-tree based algorithms for differentially private heavy hitter detection.
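
For intuition about the prefix-tree idea, here is a toy central-DP sketch that grows candidate prefixes level by level and keeps those whose Laplace-noised counts clear a threshold. It is not the federated protocol or the specific heuristics studied in the paper; the helper `dp_prefix_heavy_hitters`, the noise scale, the threshold, and the alphabet are assumptions.

```python
import numpy as np
from collections import Counter

def dp_prefix_heavy_hitters(words, alphabet, max_len, eps_per_level, threshold):
    """Toy prefix-tree heavy-hitter search with central Laplace noise.

    Illustrative only: a federated deployment (as in the paper) aggregates
    privatized client reports rather than counting raw data on a server.
    """
    rng = np.random.default_rng(0)
    candidates = [""]                    # surviving prefixes from the previous level
    for level in range(1, max_len + 1):
        extended = [p + c for p in candidates for c in alphabet]
        counts = Counter(w[:level] for w in words if len(w) >= level)
        survivors = []
        for prefix in extended:
            # Each word contributes to one prefix per level -> sensitivity 1.
            noisy = counts.get(prefix, 0) + rng.laplace(scale=1.0 / eps_per_level)
            if noisy >= threshold:
                survivors.append(prefix)
        if not survivors:
            break
        candidates = survivors
    return candidates

words = ["apple"] * 60 + ["apply"] * 40 + ["angle"] * 5 + ["bread"] * 50
print(dp_prefix_heavy_hitters(words, alphabet="abcdefghijklmnopqrstuvwxyz",
                              max_len=5, eps_per_level=1.0, threshold=20))
```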

Private optimization in the interpolation regime: faster rates and hardness results

no code implementations • 31 Oct 2022 • Hilal Asi, Karan Chadha, Gary Cheng, John Duchi

In non-private stochastic convex optimization, stochastic gradient methods converge much faster on interpolation problems (problems where a single solution simultaneously minimizes all of the sample losses) than on non-interpolating ones; we show that, in general, similar improvements are impossible in the private setting.
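
The interpolation condition above (one point driving every sample loss to its minimum at once) can be seen in a tiny over-parameterized least-squares example, where the minimum-norm solution zeroes every per-sample loss. The problem sizes and tolerance below are arbitrary, and this does not reproduce the paper's private rates or hardness constructions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 200                      # fewer samples than parameters
A = rng.normal(size=(n, d))
x_star = rng.normal(size=d)
b = A @ x_star                      # noiseless labels, so an interpolating solution exists

# Minimum-norm least-squares solution.
x_hat = np.linalg.lstsq(A, b, rcond=None)[0]

# Every per-sample loss f_i(x) = 0.5 * (a_i @ x - b_i)^2 is numerically zero at x_hat.
per_sample_losses = 0.5 * (A @ x_hat - b) ** 2
print("max per-sample loss:", float(per_sample_losses.max()))
assert per_sample_losses.max() < 1e-15   # interpolation: all losses minimized simultaneously
```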

Federated Asymptotics: a model to compare federated learning algorithms

no code implementations • 16 Aug 2021 • Gary Cheng, Karan Chadha, John Duchi

We propose an asymptotic framework to analyze the performance of (personalized) federated learning algorithms.

Meta-Learning, Personalized Federated Learning
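
As a toy analogue of comparing global, local, and personalized estimators under client heterogeneity (not the paper's asymptotic framework), the sketch below measures how well each recovers client-specific means. The Gaussian data model, the precision-weighted shrinkage rule, and the client and sample counts are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
num_clients, n_per_client = 100, 20
tau2, sigma2 = 1.0, 4.0                      # across-client vs within-client variance

theta = rng.normal(0.0, np.sqrt(tau2), size=num_clients)              # client means
data = rng.normal(theta[:, None], np.sqrt(sigma2), size=(num_clients, n_per_client))

local = data.mean(axis=1)                    # purely local estimate per client
global_ = np.full(num_clients, data.mean())  # single shared ("FedAvg-style") estimate
# Personalized: shrink each local mean toward the global mean by its relative precision.
w = tau2 / (tau2 + sigma2 / n_per_client)
personalized = w * local + (1 - w) * data.mean()

for name, est in [("global", global_), ("local", local), ("personalized", personalized)]:
    mse = float(np.mean((est - theta) ** 2))
    print(f"{name:12s} MSE vs true client means: {mse:.4f}")
```

On this toy problem the personalized estimator beats both extremes, which is the kind of comparison a framework for (personalized) federated learning algorithms is meant to formalize.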

Accelerated, Optimal, and Parallel: Some Results on Model-Based Stochastic Optimization

no code implementations • 7 Jan 2021 • Karan Chadha, Gary Cheng, John C. Duchi

We extend the Approximate-Proximal Point (aProx) family of model-based methods for solving stochastic convex optimization problems, including stochastic subgradient, proximal point, and bundle methods, to the minibatch and accelerated setting.

Stochastic Optimization
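
To make the model-based (aProx) idea described above concrete, here is a minimal single-sample truncated-model step on the non-smooth absolute loss. The helper `truncated_aprox_step`, the data, and the step size are illustrative, and the paper's minibatch and accelerated variants, which solve a proximal subproblem on an averaged minibatch model, are not reproduced here.

```python
import numpy as np

def truncated_aprox_step(x, a_i, b_i, alpha):
    """One truncated-model (aProx) step on the absolute loss f_i(x) = |a_i . x - b_i|.

    The truncated model is max(f_i(x) + g . (y - x), 0); its proximal step has the
    closed form x - min(alpha, f_i(x) / ||g||^2) * g. Single-sample sketch only.
    """
    r = a_i @ x - b_i
    f = abs(r)
    if f == 0.0:
        return x
    g = np.sign(r) * a_i                      # subgradient of the absolute loss
    step = min(alpha, f / (g @ g))            # truncation guards against overshooting
    return x - step * g

rng = np.random.default_rng(0)
n, d = 200, 20
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d)                    # consistent system: zero loss achievable

x = np.zeros(d)
for _ in range(2000):
    i = rng.integers(n)
    x = truncated_aprox_step(x, A[i], b[i], alpha=10.0)   # large, "unsafe" step size
print("mean absolute loss after aProx:", float(np.mean(np.abs(A @ x - b))))
```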

Minibatch Stochastic Approximate Proximal Point Methods

no code implementations • NeurIPS 2020 • Hilal Asi, Karan Chadha, Gary Cheng, John C. Duchi

In contrast to standard stochastic gradient methods, these methods may achieve a linear speedup in the minibatch setting even for non-smooth functions.
