We detail a new framework for privacy-preserving deep learning and discuss its advantages.
In this paper, we propose Fawkes, a system that helps individuals inoculate their images against unauthorized facial recognition models.
We study locally differentially private (LDP) bandit learning in this paper.
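To make the LDP setting concrete, the sketch below shows one common local perturbation: each user adds Laplace noise to their reward before reporting it, so the learner never sees raw feedback. The function name, the `epsilon` budget, and the unit sensitivity are illustrative assumptions, not taken from any specific paper.

```python
import numpy as np

def privatize_reward(reward, epsilon=1.0, sensitivity=1.0, rng=None):
    """Report a reward under epsilon-LDP via the Laplace mechanism.

    Illustrative sketch: noise scale = sensitivity / epsilon, so smaller
    epsilon (stronger privacy) means noisier reports.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    return reward + rng.laplace(0.0, sensitivity / epsilon)

# Each user perturbs locally; the server only ever sees noisy rewards.
rng = np.random.default_rng(0)
noisy = privatize_reward(0.5, epsilon=1.0, rng=rng)
```

The reports are unbiased, so a bandit algorithm can still estimate arm means from many noisy samples, at the cost of slower concentration.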
We first discuss a novel heuristic for cross-dataset training and evaluation, which enables the use of multiple single-task datasets (one with target-task labels and the other with privacy labels) in our problem.
This problem can be addressed by either a centralized framework that deploys a central server to train a global model on the joint data from all parties, or a distributed framework that leverages a parameter server to aggregate local model updates.
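The distributed framework above hinges on one operation: the parameter server aggregates local model updates into a global update. A minimal sketch of that aggregation step, assuming each party sends a flat parameter vector (the function name and the uniform weighting are illustrative choices):

```python
import numpy as np

def aggregate_updates(local_updates, weights=None):
    """Parameter-server aggregation: (weighted) average of per-party updates.

    local_updates: list of 1-D numpy arrays, one flat update per party.
    weights: optional per-party weights (e.g. proportional to local data size);
             defaults to a uniform average.
    """
    updates = np.stack(local_updates)
    if weights is None:
        weights = np.full(len(local_updates), 1.0 / len(local_updates))
    return np.average(updates, axis=0, weights=weights)

# Three parties send local updates; the server averages them into one
# global update, without ever pooling the parties' raw data.
party_updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
global_update = aggregate_updates(party_updates)
print(global_update)  # → [3. 4.]
```

This is the point of contrast with the centralized framework: the server sees only model updates, never the joint training data itself.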
As a result, and for the first time, in this paper we study the problem of node data privacy, where graph nodes hold potentially sensitive data that is kept private but could benefit a central server training a GNN over the graph.
Differential Privacy (DP) is the leading approach to privacy-preserving deep learning.
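The core mechanism behind DP training of deep models is per-example gradient clipping followed by Gaussian noise, as in DP-SGD. The sketch below illustrates that single privatization step on toy gradients; the function name and the `clip_norm` / `noise_multiplier` values are illustrative assumptions.

```python
import numpy as np

def privatize_gradients(per_example_grads, clip_norm=1.0,
                        noise_multiplier=1.1, rng=None):
    """One DP-SGD-style step: clip each example's gradient to clip_norm,
    sum, add Gaussian noise scaled to the clipping bound, then average."""
    if rng is None:
        rng = np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / norm))
    summed = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / len(per_example_grads)

# Two toy per-example gradients; the first (norm 5) gets clipped to norm 1,
# the second (norm 0.5) passes through unchanged.
grads = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]
noisy_avg = privatize_gradients(grads)
```

Clipping bounds each example's influence on the update, which is what lets the added noise translate into a formal (epsilon, delta) privacy guarantee.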