Regularized Loss Minimizers with Local Data Perturbation: Consistency and Data Irrecoverability

19 May 2018 · Zitao Li, Jean Honorio

We introduce a new concept, data irrecoverability, and show that the well-studied concept of data privacy is sufficient but not necessary for data irrecoverability. We show that several regularized loss minimization problems can be solved on perturbed data with theoretical guarantees of generalization, i.e., loss consistency. Our results quantitatively connect the convergence rates of these learning problems to the impossibility, for any adversary, of recovering the original data from the perturbed observations. In addition, we give several examples in which the convergence rates with perturbed data are worse than those with original data by at most a constant factor related to the amount of perturbation, i.e., noise.
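As a rough illustration of the setup the abstract describes, the sketch below fits an L2-regularized least-squares minimizer on data that is released only after additive Gaussian perturbation, and compares its test loss with the minimizer fit on the original data. The noise model, the ridge loss, and all parameter values here are illustrative assumptions for a minimal sketch, not the paper's actual construction or guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data (illustrative only, not from the paper).
n, d = 500, 10
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

# Local data perturbation: each example is released only after adding
# i.i.d. Gaussian noise; sigma controls the amount of perturbation.
sigma = 0.5
X_pert = X + sigma * rng.standard_normal(X.shape)

def ridge(X, y, lam):
    """Regularized loss minimizer: argmin_w ||Xw - y||^2 / n + lam * ||w||^2."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X / len(y) + lam * np.eye(d), X.T @ y / len(y))

lam = 0.1
w_orig = ridge(X, y, lam)       # trained on original data
w_pert = ridge(X_pert, y, lam)  # trained on perturbed data

# Loss consistency, informally: both estimators should achieve comparable
# loss on fresh data, with the gap governed by the noise level sigma.
X_test = rng.standard_normal((n, d))
y_test = X_test @ w_true
print("test loss (original data): ", np.mean((X_test @ w_orig - y_test) ** 2))
print("test loss (perturbed data):", np.mean((X_test @ w_pert - y_test) ** 2))
```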
