Search Results for author: Arun Ganesh

Found 8 papers, 0 papers with code

Tight Group-Level DP Guarantees for DP-SGD with Sampling via Mixture of Gaussians Mechanisms

no code implementations • 17 Jan 2024 • Arun Ganesh

We give a procedure for computing group-level $(\epsilon, \delta)$-DP guarantees for DP-SGD, when using Poisson sampling or fixed batch size sampling.
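
As a rough numerical illustration of the quantity being computed, here is a minimal sketch of a one-step group-level $\delta(\epsilon)$ estimate under Poisson sampling, assuming the mixture-of-Gaussians dominating pair $P = \sum_i \mathrm{Binom}(k, p)(i)\, N(i, \sigma^2)$ versus $Q = N(0, \sigma^2)$. The helper name and parameter choices are hypothetical; composition over many steps would need an accountant on top of this.

```python
import numpy as np
from scipy import stats

def group_delta(eps, sigma, p, k, grid=200_000):
    """Estimate delta(eps) for ONE step of Poisson-subsampled Gaussian
    DP-SGD at group size k, via the mixture-of-Gaussians pair
    P = sum_i Binom(k, p)(i) * N(i, sigma^2) vs Q = N(0, sigma^2).
    Hypothetical helper for illustration only."""
    x = np.linspace(-20 * sigma, 20 * sigma + k, grid)
    weights = stats.binom.pmf(np.arange(k + 1), k, p)
    P = sum(w * stats.norm.pdf(x, loc=i, scale=sigma)
            for i, w in enumerate(weights))
    Q = stats.norm.pdf(x, loc=0.0, scale=sigma)
    # Hockey-stick divergence: integral of max(0, P - e^eps * Q).
    return float(np.sum(np.maximum(0.0, P - np.exp(eps) * Q)) * (x[1] - x[0]))

print(group_delta(eps=1.0, sigma=2.0, p=0.01, k=4))
```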

Privacy Amplification for Matrix Mechanisms

no code implementations • 24 Oct 2023 • Christopher A. Choquette-Choo, Arun Ganesh, Thomas Steinke, Abhradeep Thakurta

In this paper, we propose "MMCC", the first algorithm to analyze privacy amplification via sampling for any generic matrix mechanism.
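
For context, a minimal sketch of the base (unamplified) matrix mechanism that MMCC's amplification analysis applies to: factor the workload $A = BC$, noise the $C$-space statistics, and post-process with $B$. The Cholesky factorization is one illustrative choice, and the clipping and calibration of $\sigma$ to the sensitivity of $C$ are omitted here.

```python
import numpy as np

def matrix_mechanism(x, A, sigma, rng):
    """Toy sketch: release an estimate of A @ x by factoring A = B @ C,
    noising C @ x, and post-processing with B. Sensitivity calibration
    is omitted; illustration only, not MMCC itself."""
    C = np.linalg.cholesky(A.T @ A).T   # upper triangular, C^T C = A^T A
    B = A @ np.linalg.inv(C)            # so that B @ C = A
    z = rng.normal(scale=sigma, size=x.shape)
    return B @ (C @ x + z)

rng = np.random.default_rng(0)
T = 8
A = np.tril(np.ones((T, T)))   # prefix-sum workload, as in DP-SGD accounting
x = rng.normal(size=T)         # stand-in for per-step (scalar) gradients
print(matrix_mechanism(x, A, sigma=1.0, rng=rng))
```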

Correlated Noise Provably Beats Independent Noise for Differentially Private Learning

no code implementations • 10 Oct 2023 • Christopher A. Choquette-Choo, Krishnamurthy Dvijotham, Krishna Pillutla, Arun Ganesh, Thomas Steinke, Abhradeep Thakurta

We characterize the asymptotic learning utility for any choice of the correlation function, giving precise analytical bounds for linear regression and, for general convex functions, bounds given by the solution to a convex program.
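
To make the object of study concrete, here is a minimal sketch of linearly correlated noise $z_t = \sum_j c_j w_{t-j}$ built from i.i.d. Gaussians, as injected into the training steps. The coefficients below are illustrative placeholders, not the paper's optimal correlation function; `coeffs = [1.0]` recovers independent DP-SGD noise.

```python
import numpy as np

def correlated_noise(T, d, coeffs, rng):
    """Noise sequence z_t = sum_j coeffs[j] * w_{t-j} with i.i.d. Gaussian
    w, i.e., linearly correlated across steps. Sign-alternating coeffs
    give anticorrelated (DP-FTRL-style) noise. Illustrative coefficients,
    not the paper's optimum."""
    L = len(coeffs)
    w = rng.normal(size=(T + L - 1, d))
    z = np.zeros((T, d))
    for t in range(T):
        for j, c in enumerate(coeffs):
            z[t] += c * w[t + L - 1 - j]
    return z

rng = np.random.default_rng(0)
z_indep = correlated_noise(100, 10, [1.0], rng)              # independent noise
z_anti  = correlated_noise(100, 10, [1.0, -0.5, -0.25], rng) # anticorrelated
```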

Why Is Public Pretraining Necessary for Private Model Training?

no code implementations • 19 Feb 2023 • Arun Ganesh, Mahdi Haghifam, Milad Nasr, Sewoong Oh, Thomas Steinke, Om Thakkar, Abhradeep Thakurta, Lun Wang

To explain this phenomenon, we hypothesize that the non-convex loss landscape of model training requires the optimization algorithm to go through two phases.

Transfer Learning

Recycling Scraps: Improving Private Learning by Leveraging Intermediate Checkpoints

no code implementations • 4 Oct 2022 • Virat Shejwalkar, Arun Ganesh, Rajiv Mathews, Om Thakkar, Abhradeep Thakurta

Empirically, we show that the last few checkpoints can provide a reasonable lower bound for the variance of a converged DP model.
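
A minimal sketch of the kind of checkpoint aggregation this suggests: pool the outputs of the last $k$ checkpoints and read off their mean and sample variance, with the variance serving as the empirical proxy lower bound described above. The helper and its inputs are hypothetical.

```python
import numpy as np

def checkpoint_stats(checkpoint_outputs):
    """Given outputs (e.g., logits on a fixed batch) from the last k DP
    training checkpoints, shape (k, ...), return their mean (an aggregated
    prediction) and the sample variance across checkpoints, used here as
    an empirical proxy lower bound for the converged model's variance."""
    outs = np.asarray(checkpoint_outputs, dtype=float)
    return outs.mean(axis=0), outs.var(axis=0, ddof=1)

# Example with k = 4 hypothetical checkpoints scoring 3 examples:
mean_out, var_out = checkpoint_stats([[0.9, 0.2, 0.4],
                                      [0.8, 0.3, 0.5],
                                      [0.9, 0.1, 0.4],
                                      [0.7, 0.2, 0.6]])
```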

Differentially Private Sampling from Rashomon Sets, and the Universality of Langevin Diffusion for Convex Optimization

no code implementations • 4 Apr 2022 • Arun Ganesh, Abhradeep Thakurta, Jalaj Upadhyay

In this paper we provide an algorithmic framework based on Langevin diffusion (LD) and its corresponding discretizations that allows us to simultaneously obtain: (i) an algorithm for sampling from the exponential mechanism, whose privacy analysis does not depend on convexity and which can be stopped at any time without compromising privacy, and (ii) tight uniform stability guarantees for the exponential mechanism.
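
For intuition, a minimal sketch of the simplest such discretization, the unadjusted Langevin algorithm targeting $\pi(x) \propto \exp(-f(x))$. The privacy calibration (choosing the step size and iteration count as a function of $\epsilon$) that the paper's analysis supplies is omitted here.

```python
import numpy as np

def langevin_sample(grad_f, x0, eta, steps, rng):
    """Unadjusted Langevin algorithm: a discretization of the Langevin
    diffusion targeting pi(x) ∝ exp(-f(x)). Consistent with the anytime
    framing above, the chain can be stopped at any step; privacy
    calibration of (eta, steps) is omitted in this sketch."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x - eta * grad_f(x) + np.sqrt(2.0 * eta) * rng.normal(size=x.shape)
    return x

rng = np.random.default_rng(0)
# Example: sample approximately from exp(-||x||^2 / 2), i.e., N(0, I).
x = langevin_sample(lambda x: x, x0=np.zeros(3), eta=0.01, steps=5000, rng=rng)
```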

Fairness

Public Data-Assisted Mirror Descent for Private Model Training

no code implementations • 1 Dec 2021 • Ehsan Amid, Arun Ganesh, Rajiv Mathews, Swaroop Ramaswamy, Shuang Song, Thomas Steinke, Vinith M. Suriyakumar, Om Thakkar, Abhradeep Thakurta

In this paper, we revisit the problem of using in-distribution public data to improve the privacy/utility trade-offs for differentially private (DP) model training.
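
To illustrate the general shape of public-data-assisted private training, here is a simplified one-step sketch that clips and noises private per-example gradients (standard DP-SGD) and mixes in an in-distribution public gradient via a hypothetical weight `alpha`. This convex combination is only a stand-in: the paper's actual algorithm is a mirror descent whose mirror map is induced by the public loss.

```python
import numpy as np

def pda_step(theta, priv_grads, pub_grad, clip, sigma, lr, alpha, rng):
    """Simplified public-data-assisted DP step. NOT the paper's
    mirror-descent method; a convex-combination stand-in with a
    hypothetical mixing weight alpha."""
    # Standard DP-SGD treatment of the private minibatch.
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
               for g in priv_grads]
    noisy = (np.sum(clipped, axis=0)
             + rng.normal(scale=sigma * clip, size=np.shape(theta)))
    noisy /= len(priv_grads)
    # Mix in the (non-private) public gradient and take a step.
    return theta - lr * ((1.0 - alpha) * noisy + alpha * pub_grad)
```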

Federated Learning

Faster Differentially Private Samplers via Rényi Divergence Analysis of Discretized Langevin MCMC

no code implementations • NeurIPS 2020 • Arun Ganesh, Kunal Talwar

Various differentially private algorithms instantiate the exponential mechanism, and require sampling from the distribution $\exp(-f)$ for a suitable function $f$.
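
For contrast with the continuous sampling problem above, here is the classic finite-candidate exponential mechanism, which selects index $i$ with probability proportional to $\exp(\epsilon\, \mathrm{score}_i / (2\Delta))$; Langevin-based samplers like the paper's target the continuous analogue, sampling from $\exp(-f)$ without enumerating candidates. The helper name is hypothetical.

```python
import numpy as np

def exponential_mechanism(scores, eps, sensitivity, rng):
    """Finite exponential mechanism: pick index i with probability
    proportional to exp(eps * scores[i] / (2 * sensitivity))."""
    logits = eps * np.asarray(scores, dtype=float) / (2.0 * sensitivity)
    probs = np.exp(logits - logits.max())   # shift for numerical stability
    probs /= probs.sum()
    return rng.choice(len(scores), p=probs)

rng = np.random.default_rng(0)
print(exponential_mechanism([0.1, 0.9, 0.5], eps=1.0, sensitivity=1.0, rng=rng))
```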
