1 code implementation • 1 May 2022 • Héber H. Arcolezi, Jean-François Couchot, Denis Renaud, Bechara Al Bouna, Xiaokui Xiao
As shown in the results, differentially private deep learning models trained under gradient or input perturbation achieve nearly the same performance as non-private deep learning models, with the loss in performance varying between $0.57\%$ and $2.8\%$.
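As a rough illustration of the gradient-perturbation mechanism mentioned above (not the authors' exact implementation), a DP-SGD-style step clips each per-example gradient to a bound $C$ and adds Gaussian noise scaled to that bound. The function and parameter names below are hypothetical; only the clip-then-noise structure is assumed.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, rng):
    """One gradient-perturbation step (DP-SGD style sketch):
    clip each per-example gradient to L2 norm <= clip_norm,
    average, then add Gaussian noise with standard deviation
    noise_multiplier * clip_norm / batch_size."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down only gradients whose norm exceeds the clip bound.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    noise = rng.normal(0.0, sigma, size=mean_grad.shape)
    return mean_grad + noise

rng = np.random.default_rng(0)
# Two toy per-example gradients with L2 norms 5.0 and 0.5.
grads = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]
# With noise_multiplier=0 the step reduces to plain clipped averaging:
# [3,4] is clipped to [0.6,0.8]; [0.3,0.4] is unchanged; mean = [0.45,0.6].
noisy = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=0.0, rng=rng)
print(noisy)  # → [0.45 0.6 ]
```

In practice a nonzero `noise_multiplier` is chosen from the target privacy budget $(\epsilon, \delta)$ via an accountant; the zero-noise call here only checks the clipping logic.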