Privacy Preserving Deep Learning
23 papers with code • 0 benchmarks • 3 datasets
The goal of privacy-preserving (deep) learning is to train a model while preserving the privacy of the training dataset. Typically, the trained model itself is expected to be privacy-preserving (e.g., because the training algorithm satisfies differential privacy).
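The most common way to make a training algorithm differentially private is DP-SGD: clip each per-example gradient to a fixed norm, average, and add calibrated Gaussian noise before the update. A minimal NumPy sketch of one such step (function and parameter names are illustrative, not from any specific library):

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.1, rng=None):
    """One DP-SGD step: clip each per-example gradient to clip_norm,
    average the clipped gradients, then add Gaussian noise whose scale
    is proportional to noise_multiplier * clip_norm."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down (never up) so every example's influence is bounded.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0,
                       noise_multiplier * clip_norm / len(per_example_grads),
                       size=avg.shape)
    return params - lr * (avg + noise)
```

The clipping bounds each example's sensitivity, which is what lets the added noise translate into a formal (ε, δ) guarantee; production implementations additionally track the privacy budget across steps.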
These leaderboards are used to track progress in Privacy-Preserving Deep Learning.
Most implemented papers
Privacy-Preserving Deep Action Recognition: An Adversarial Learning Framework and A New Dataset
We first discuss a novel heuristic of cross-dataset training and evaluation, enabling the use of multiple single-task datasets (one with target-task labels and the other with privacy labels) for our problem.
A generic framework for privacy preserving deep learning
We detail a new framework for privacy preserving deep learning and discuss its assets.
Locally Differentially Private (Contextual) Bandits Learning
We study locally differentially private (LDP) bandits learning in this paper.
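Local differential privacy (LDP) means each user randomizes their own data before it ever reaches the server. The classic mechanism is randomized response; a minimal sketch for binary values (function names are illustrative):

```python
import math
import random

def randomized_response(bit, epsilon, rng=None):
    """Report the true bit with probability e^eps / (e^eps + 1),
    otherwise report its flip. Satisfies epsilon-LDP."""
    rng = rng or random.Random()
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if rng.random() < p else 1 - bit

def debias(reports, epsilon):
    """Unbiased estimate of the population mean from noisy reports:
    E[report] = mean*(2p - 1) + (1 - p), solved for mean."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)
```

An LDP bandit algorithm applies the same idea to reward feedback: each user perturbs their reward locally, and the learner debiases the aggregate before updating its arm estimates.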
Sisyphus: A Cautionary Tale of Using Low-Degree Polynomial Activations in Privacy-Preserving Deep Learning
In this work, we ask: Is it feasible to substitute all ReLUs with low-degree polynomial activation functions for building deep, privacy-friendly neural networks?
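The motivation is that homomorphic encryption and MPC protocols evaluate additions and multiplications cheaply but pay heavily for the non-polynomial comparison inside ReLU. A common degree-2 stand-in (used for illustration here, not taken from the paper) matches ReLU exactly at x = ±1 but deviates elsewhere:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def poly2(x):
    """Degree-2 polynomial stand-in for ReLU: 0.25*(x + 1)^2.
    Agrees with ReLU at x = -1 and x = 1; worst error on [-1, 1]
    is 0.25, attained at x = 0."""
    return 0.25 * x**2 + 0.5 * x + 0.25
```

The approximation error is small near the interpolation points but grows quadratically outside [-1, 1], which is one reason (as the paper's title suggests) that naively swapping every ReLU in a deep network can destabilize training.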
Towards Fair and Privacy-Preserving Federated Deep Models
This problem can be addressed by either a centralized framework that deploys a central server to train a global model on the joint data from all parties, or a distributed framework that leverages a parameter server to aggregate local model updates.
Fawkes: Protecting Privacy against Unauthorized Deep Learning Models
In this paper, we propose Fawkes, a system that helps individuals inoculate their images against unauthorized facial recognition models.
AriaNN: Low-Interaction Privacy-Preserving Deep Learning via Function Secret Sharing
We evaluate our end-to-end system for private inference between distant servers on standard neural networks such as AlexNet, VGG16 or ResNet18, and for private training on smaller networks like LeNet.
Locally Private Graph Neural Networks
In this paper, we study the problem of node data privacy, where graph nodes hold potentially sensitive data that must be kept private yet could be useful to a central server training a GNN over the graph.
Tempered Sigmoid Activations for Deep Learning with Differential Privacy
Because learning sometimes involves sensitive data, machine learning algorithms have been extended to offer privacy for training data.
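The paper's observation is that DP-SGD's gradient clipping interacts badly with unbounded activations like ReLU, and that a bounded family helps. The tempered sigmoid family has a scale s, inverse temperature T, and offset o; with s = 2, T = 2, o = 1 it reduces exactly to tanh. A minimal sketch:

```python
import numpy as np

def tempered_sigmoid(x, s=2.0, T=2.0, o=1.0):
    """Tempered sigmoid family: s / (1 + exp(-T*x)) - o.
    Output is bounded in (-o, s - o); the defaults recover tanh,
    since 2/(1 + e^(-2x)) - 1 = tanh(x)."""
    return s / (1.0 + np.exp(-T * x)) - o
```

Because the activation output is bounded, the per-example gradients that DP-SGD must clip are better behaved, so less signal is destroyed by clipping at a given noise level.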
Secure Data Sharing With Flow Model
In the classical multi-party computation setting, multiple parties jointly compute a function without revealing their own input data.
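The simplest building block behind such protocols is additive secret sharing: each input is split into random shares that individually reveal nothing, yet linear functions (like a joint sum) can be computed share-by-share. A minimal sketch (not the paper's protocol, which additionally uses a flow model):

```python
import random

MOD = 2**32  # all arithmetic is modulo a fixed ring size

def share(value, n_parties, rng=None):
    """Split value into n_parties random additive shares that
    sum to value modulo MOD; any n-1 shares look uniformly random."""
    rng = rng or random.Random()
    shares = [rng.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares modulo MOD."""
    return sum(shares) % MOD
```

To jointly compute a sum, each party adds the shares it holds locally and the parties reveal only the share-wise sums, which reconstruct to the total without exposing any individual input.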