Fast Estimation for Privacy and Utility in Differentially Private Machine Learning

1 Jan 2021  ·  Yuzhe Li, Yong Liu, Weiping Wang, Bo Li, Nan Liu ·

Recently, differential privacy has been widely studied in machine learning due to its formal privacy guarantees for data analysis. As one of the most important parameters of differential privacy, $\epsilon$ controls the crucial tradeoff between the strength of the privacy guarantee and the utility of the model. The choice of $\epsilon$ therefore has a great influence on the performance of differentially private learning models, yet there is still no rigorous method for choosing it. In this paper, we derive, through strict mathematical analysis, the influence of $\epsilon$ on the utility of private learning models, and propose a novel approximate approach for estimating the utility at any $\epsilon$ value. We show that our approximate approach has a fairly small error and can be used to estimate the optimal $\epsilon$ given a user's expected utility. Experimental results demonstrate the high estimation accuracy and broad applicability of our approximate approach.
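The privacy–utility tradeoff the abstract describes can be illustrated with the standard Laplace mechanism (not the paper's own method, which is not reproduced here): noise is calibrated to sensitivity/$\epsilon$, so a smaller $\epsilon$ (stronger privacy) injects larger noise and degrades utility. A minimal sketch:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value under epsilon-differential privacy
    via the Laplace mechanism: noise scale = sensitivity / epsilon.
    Smaller epsilon -> stronger privacy -> larger noise -> lower utility."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

rng = np.random.default_rng(0)
true_mean = 0.5  # hypothetical statistic to release privately
for eps in (0.1, 1.0, 10.0):
    noisy = [laplace_mechanism(true_mean, sensitivity=1.0,
                               epsilon=eps, rng=rng)
             for _ in range(1000)]
    err = float(np.mean([abs(v - true_mean) for v in noisy]))
    print(f"epsilon={eps:>4}: mean absolute error ~ {err:.3f}")
```

For Laplace noise with scale $b$, the expected absolute error is exactly $b = \text{sensitivity}/\epsilon$, so the printed error shrinks roughly tenfold as $\epsilon$ grows tenfold. Estimating this utility curve analytically, rather than by Monte Carlo sampling as above, is the kind of fast estimation the paper targets.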
