Privacy Preserving Deep Learning
26 papers with code • 0 benchmarks • 3 datasets
The goal of privacy-preserving (deep) learning is to train a model while preserving the privacy of the training dataset. Typically, it is understood that the trained model should be privacy-preserving (e.g., because the training algorithm is differentially private).
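As a hedged illustration of what a differentially private training algorithm looks like, the sketch below follows the standard DP-SGD recipe: clip each per-example gradient to a fixed L2 bound, then add Gaussian noise calibrated to that bound before the update. The function name and the `clip_norm` / `noise_multiplier` values are illustrative placeholders, not taken from any paper listed here:

```python
import numpy as np

def dp_sgd_step(per_example_grads, params, clip_norm=1.0,
                noise_multiplier=1.0, lr=0.1, rng=None):
    """One DP-SGD update: clip per-example gradients, add noise, step."""
    rng = rng or np.random.default_rng()
    # Clip each example's gradient so its L2 norm is at most clip_norm.
    clipped = [g / max(1.0, np.linalg.norm(g) / clip_norm)
               for g in per_example_grads]
    # Sum, add Gaussian noise scaled to the clipping bound, then average.
    noise = rng.normal(0.0, noise_multiplier * clip_norm,
                       size=params.shape)
    noisy_mean = (np.sum(clipped, axis=0) + noise) / len(clipped)
    return params - lr * noisy_mean

rng = np.random.default_rng(0)
params = np.zeros(4)
grads = [rng.normal(size=4) for _ in range(8)]
params = dp_sgd_step(grads, params, rng=rng)
```

Because each example's contribution is bounded by `clip_norm`, the added noise can be calibrated to give a formal (epsilon, delta) privacy guarantee for the whole training run.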
Latest papers with no code
Converting Transformers to Polynomial Form for Secure Inference Over Homomorphic Encryption
This innovation enables us to perform secure inference on LMs with WikiText-103.
The Paradox of Noise: An Empirical Study of Noise-Infusion Mechanisms to Improve Generalization, Stability, and Privacy in Federated Learning
In a data-centric era, concerns regarding privacy and ethical data handling grow as machine learning relies more on personal information.
Generative Model-Based Attack on Learnable Image Encryption for Privacy-Preserving Deep Learning
By taking advantage of leaked information from encrypted images, we propose a guided generative model as an attack on learnable image encryption to recover personally identifiable visual information.
Training Differentially Private Graph Neural Networks with Random Walk Sampling
We propose to solve this issue by training graph neural networks on disjoint subgraphs of a given training graph.
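As a rough sketch of the idea above (assumed, not the authors' code), disjoint subgraphs can be carved out of a training graph with random walks that never revisit a node already assigned to a subgraph, so each node's data influences at most one training shard. The function name, `walk_len`, and the adjacency-list representation are illustrative assumptions:

```python
import random

def disjoint_random_walk_subgraphs(adj, walk_len=4, seed=0):
    """Partition nodes into disjoint subgraphs via random walks.

    adj: dict mapping node -> list of neighbours. Every node ends up
    in exactly one subgraph, so subgraphs share no nodes.
    """
    rng = random.Random(seed)
    unvisited = set(adj)
    subgraphs = []
    while unvisited:
        # Start a new walk from a yet-unassigned node.
        cur = rng.choice(sorted(unvisited))
        walk = [cur]
        unvisited.discard(cur)
        for _ in range(walk_len - 1):
            # Only step to neighbours not yet claimed by any subgraph.
            candidates = [v for v in adj[cur] if v in unvisited]
            if not candidates:
                break
            cur = rng.choice(candidates)
            walk.append(cur)
            unvisited.discard(cur)
        subgraphs.append(walk)
    return subgraphs

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
parts = disjoint_random_walk_subgraphs(adj)
```

Training a separate (or jointly updated) GNN on each disjoint subgraph bounds how much any single node can affect the model, which is what makes per-node differential privacy accounting tractable.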
Privacy-preserving Deep Learning based Record Linkage
The global model is then used by a linkage unit to classify unlabelled record pairs as matches or non-matches.
Review Learning: Alleviating Catastrophic Forgetting with Generative Replay without Generator
When a deep learning model is sequentially trained on different datasets, it forgets the knowledge acquired from previous data, a phenomenon known as catastrophic forgetting.
Privacy-Preserving Deep Learning Model for Covid-19 Disease Detection
The dataset from the Kaggle website is used to evaluate the designed model for COVID-19 detection.
Securing the Classification of COVID-19 in Chest X-ray Images: A Privacy-Preserving Deep Learning Approach
In this paper, we propose a privacy-preserving deep learning (PPDL)-based approach to secure the classification of chest X-ray images.
Communication-Efficient Federated Distillation with Active Data Sampling
Federated Distillation (FD) is a recently proposed alternative that enables communication-efficient and robust FL, achieving an orders-of-magnitude reduction in communication overhead compared with FedAvg while flexibly handling heterogeneous models at the clients.
DP-FP: Differentially Private Forward Propagation for Large Models
Our DP-FP employs (1) novel representation clipping followed by noise addition in the forward-propagation stage, and (2) micro-batch construction via subsampling to achieve DP amplification and reduce the noise power to $1/M$, where $M$ is the number of micro-batches in a step.
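A minimal sketch of the two ingredients named in the snippet above, under assumed details (function names, clipping on a pooled representation vector, and Gaussian noise are illustrative, not the paper's exact construction): clip the forward representation's L2 norm, add noise scaled to the clipping bound, and average over M micro-batches so the noise variance in the aggregate shrinks by 1/M:

```python
import numpy as np

def clip_and_noise(h, clip_norm=1.0, sigma=1.0, rng=None):
    """Representation clipping + noise addition in the forward pass."""
    rng = rng or np.random.default_rng()
    # Rescale h so its L2 norm is at most clip_norm.
    h = h / max(1.0, np.linalg.norm(h) / clip_norm)
    # Add Gaussian noise calibrated to the clipping bound.
    return h + rng.normal(0.0, sigma * clip_norm, size=h.shape)

def dp_fp_batch(reps, num_micro_batches, **kw):
    """Split a batch into M micro-batches and average their noisy
    representations; averaging M independent noise draws reduces the
    effective noise power by a factor of 1/M."""
    micro = np.array_split(reps, num_micro_batches)
    return np.mean([clip_and_noise(m.mean(axis=0), **kw) for m in micro],
                   axis=0)

rng = np.random.default_rng(0)
reps = rng.normal(size=(32, 8))   # batch of 32 representation vectors
out = dp_fp_batch(reps, num_micro_batches=4, rng=rng)
```

The 1/M reduction follows from averaging M independent Gaussian draws: the mean of M noise terms with variance v has variance v/M.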