no code implementations • 13 Dec 2023 • Sebastian O. Jordan, Qiongxiu Li, Richard Heusdens
The main idea is to exploit a certain structure in the update equations when inserting noise, such that the private data are protected without compromising the algorithm's accuracy.
no code implementations • 16 Sep 2022 • Qiongxiu Li, Jaron Skovsted Gundersen, Katrine Tjell, Rafal Wisniewski, Mads Græsbøll Christensen
Privacy has become a major concern in machine learning.
1 code implementation • 17 Aug 2022 • Xiao Li, Qiongxiu Li, Zhanhao Hu, Xiaolin Hu
We demonstrate that the generalization gap and privacy leakage are less correlated than previously reported.
1 code implementation • 29 Apr 2020 • Qiongxiu Li, Richard Heusdens, Mads Græsbøll Christensen
We therefore propose to insert noise in the non-convergent subspace through the dual variable such that the private data are protected, and the accuracy of the desired solution is completely unaffected.
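The core idea above — noise placed in a subspace that the solution does not depend on protects the data while leaving the result exact — can be illustrated with a toy analogue in the average-consensus setting. This is only a hedged sketch of the subspace idea, not the paper's actual dual-variable construction in PDMM: here the "solution subspace" is the span of the all-ones vector, and zero-sum noise lives in its orthogonal complement, so the average is exactly unaffected.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5
x = rng.normal(size=N)  # private values held by N nodes (illustrative)

# Draw i.i.d. noise and project it onto the complement of span{1}:
# subtracting the mean makes the noise sum to zero.
raw = rng.normal(scale=10.0, size=N)
noise = raw - raw.mean()        # lies in the subspace orthogonal to 1

x_noisy = x + noise             # individual values are heavily masked

# The averaging operator only retains the span{1} component,
# so the consensus result is exactly unaffected by the noise.
print(x.mean(), x_noisy.mean())
```

In the paper's setting the role of the orthogonal complement is played by the non-convergent subspace of the dual updates, which serves the same purpose: large noise for privacy, zero effect on the converged solution.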