Gaussian Differential Privacy Transformation: from identification to application

29 Sep 2021 · Yi Liu, Ke Sun, Bei Jiang, Linglong Kong

Gaussian differential privacy (GDP) is a single-parameter family of privacy notions that provides coherent guarantees against the exposure of individuals by machine learning models. Compared with traditional $(\epsilon,\delta)$-differential privacy (DP), GDP is more interpretable and tightens the bounds given by standard DP composition theorems. In this paper, we start with an exact privacy-profile characterization of $(\epsilon,\delta)$-DP and then define an efficient, tractable, and visualizable tool called the Gaussian differential privacy transformation (GDPT). Using theoretical properties of the GDPT, we develop an easy-to-verify criterion for characterizing and identifying GDP algorithms: an algorithm is GDP if and only if an asymptotic condition on its privacy profile is met. By developing numerical properties of the GDPT, we give a method to narrow down the possible values of an optimal privacy measurement $\mu$ to within an arbitrarily small and quantifiable margin of error. As applications of our newly developed tools, we revisit some established $(\epsilon,\delta)$-DP algorithms and find that their utility can be improved. We additionally compare two single-parameter families of privacy notions, $\epsilon$-DP and $\mu$-GDP. Lastly, we use the GDPT to examine the effect of subsampling under the GDP framework.

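For readers who want to experiment with the GDP-to-DP conversion that the GDPT builds on, the sketch below evaluates the exact privacy profile of a $\mu$-GDP mechanism: a mechanism is $\mu$-GDP if and only if it is $(\epsilon, \delta(\epsilon))$-DP for all $\epsilon \ge 0$, where $\delta(\epsilon) = \Phi(-\epsilon/\mu + \mu/2) - e^{\epsilon}\,\Phi(-\epsilon/\mu - \mu/2)$ (Dong, Roth, and Su, 2019). This is a minimal illustration, not code from the paper; the function name gdp_privacy_profile and the use of SciPy are our assumptions.

    # Minimal sketch (not from the paper): delta(eps) curve of a mu-GDP mechanism.
    import numpy as np
    from scipy.stats import norm

    def gdp_privacy_profile(eps: float, mu: float) -> float:
        """Exact privacy profile of mu-GDP:
        delta(eps) = Phi(-eps/mu + mu/2) - exp(eps) * Phi(-eps/mu - mu/2)."""
        return norm.cdf(-eps / mu + mu / 2) - np.exp(eps) * norm.cdf(-eps / mu - mu / 2)

    # Example: the Gaussian mechanism with sensitivity 1 and noise scale sigma
    # satisfies mu-GDP with mu = 1/sigma; tabulate its (eps, delta) trade-off.
    for eps in [0.1, 0.5, 1.0, 2.0]:
        print(f"eps={eps:4.1f}  delta={gdp_privacy_profile(eps, mu=1.0):.3e}")

Plotting this curve over a range of $\epsilon$ gives the kind of visual comparison of privacy profiles that the GDPT is designed to make tractable.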