Preserving Differential Privacy in Adversarial Learning with Provable Robustness

23 Mar 2019  ·  NhatHai Phan, My T. Thai, Ruoming Jin, Han Hu, Dejing Dou ·

In this paper, we aim to develop a novel mechanism to preserve differential privacy (DP) in adversarial learning for deep neural networks, with provable robustness to adversarial examples. We leverage the sequential composition theory in differential privacy to establish a new connection between differential privacy preservation and provable robustness. To address the trade-off among model utility, privacy loss, and robustness, we design an original, differentially private, adversarial objective function, based on the post-processing property of differential privacy, to tighten the sensitivity of our model. Theoretical analysis and thorough evaluations show that our mechanism notably improves the robustness of DP deep neural networks.
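The sequential composition property the abstract builds on can be illustrated in isolation: running several DP mechanisms on the same data consumes a total privacy budget equal to the sum of the per-query budgets. The sketch below uses the Laplace mechanism with a sensitivity of 1 and arbitrary example budgets; these specifics are illustrative assumptions and are not the paper's adversarial-training mechanism.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng):
    """Release `value` with epsilon-DP by adding Laplace noise
    scaled to sensitivity / epsilon (standard Laplace mechanism)."""
    return value + rng.laplace(0.0, sensitivity / epsilon)

rng = np.random.default_rng(0)
true_value = 42.0          # a statistic computed on private data (assumed)
budgets = [0.5, 0.3, 0.2]  # per-query privacy budgets (illustrative)

# Each query is an independent DP release of the same statistic.
releases = [laplace_mechanism(true_value, 1.0, eps, rng) for eps in budgets]

# Sequential composition: total privacy loss is the sum of the budgets.
total_epsilon = sum(budgets)
print(total_epsilon)  # 1.0
```

The post-processing property the abstract also invokes means any function applied to `releases` afterwards (e.g. averaging them, or feeding them into a classifier) incurs no additional privacy cost beyond `total_epsilon`.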


Categories

Cryptography and Security