Local Differential Privacy in Decentralized Optimization

16 Feb 2019 · Hanshen Xiao, Yu Ye, Srinivas Devadas

Privacy concerns with sensitive data are receiving increasing attention. In this paper, we study local differential privacy (LDP) in interactive decentralized optimization. By constructing random local aggregators, we propose a framework that amplifies LDP by a constant factor. We take the Alternating Direction Method of Multipliers (ADMM) and decentralized gradient descent as two concrete examples, where experiments support our theory. From an asymptotic viewpoint, we address the following question: Under LDP, is it possible to design a distributed private minimizer for arbitrary closed convex constraints with utility loss not explicitly dependent on dimensionality? As an auxiliary result, we also show that with merely linear secret sharing, information-theoretic privacy is achievable against a bounded number of colluding agents.
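To make the setting concrete, below is a minimal sketch (not the paper's algorithm) of one decentralized gradient descent step under LDP: each agent perturbs the iterate it broadcasts to its neighbors with the Laplace mechanism before consensus averaging. The mixing matrix, step size, noise scale, and all function names are illustrative assumptions.

```python
import numpy as np

def ldp_dgd_step(x, grads, W, eta=0.1, eps=1.0, sensitivity=1.0, rng=None):
    """One decentralized gradient descent step under local differential privacy.

    x:     (n_agents, dim) current local iterates
    grads: (n_agents, dim) local gradients evaluated at x
    W:     (n_agents, n_agents) doubly stochastic mixing matrix (network topology)
    eps:   per-step LDP budget; each agent adds Laplace noise of scale
           sensitivity / eps to the message it shares with its neighbors.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Local DP: noise is added by the data owner, so no neighbor (or
    # eavesdropper on the links) ever observes a true iterate.
    noisy_x = x + rng.laplace(scale=sensitivity / eps, size=x.shape)
    # Consensus averaging over the noisy messages, then a local gradient step.
    return W @ noisy_x - eta * grads
```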
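The linear secret sharing claim can likewise be illustrated with a standard additive scheme over a finite field (the paper's construction may differ in detail). Any n-1 shares are uniform and independent of the secret, and the scheme is linear, so agents can aggregate shares of different values and reconstruct only the sum. The modulus and all names here are assumptions for illustration.

```python
import secrets

P = 2**61 - 1  # public prime modulus (illustrative choice)

def share(secret, n):
    """Split `secret` into n additive shares modulo P.

    Any n-1 shares are uniformly distributed, giving information-theoretic
    privacy against up to n-1 colluding agents.
    """
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Linearity: summing shares component-wise yields shares of the sum,
# which is what makes such schemes compatible with gradient aggregation.
shares_a, shares_b = share(7, 3), share(5, 3)
agg = [(sa + sb) % P for sa, sb in zip(shares_a, shares_b)]
assert reconstruct(agg) == 12
```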
