Privacy Preserving Deep Learning
26 papers with code • 0 benchmarks • 3 datasets
The goal of privacy-preserving (deep) learning is to train a model while preserving the privacy of the training dataset. Typically, it is understood that the trained model should be privacy-preserving (e.g., because the training algorithm is differentially private).
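Differential privacy is usually achieved by adding calibrated noise somewhere in the training pipeline. As a minimal illustration of the underlying primitive (not any specific paper's method), here is the classic Gaussian mechanism, which releases a query result with (ε, δ)-differential privacy by adding noise scaled to the query's L2 sensitivity:

```python
import numpy as np

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng=None):
    """Release `value` with (epsilon, delta)-differential privacy by
    adding Gaussian noise calibrated to the query's L2 sensitivity.
    Uses the classic analysis (valid for epsilon <= 1)."""
    rng = rng if rng is not None else np.random.default_rng()
    # Standard deviation from the classic Gaussian-mechanism bound.
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return value + rng.normal(0.0, sigma, size=np.shape(value))
```

In DP training, the same idea is applied to gradients rather than to a single query result; the sensitivity is then controlled by clipping each example's gradient.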
Latest papers
Privacy-Preserving Deep Learning Using Deformable Operators for Secure Task Learning
To address these challenges, we propose a novel privacy-preserving framework that uses a set of deformable operators for secure task learning.
Mind the Gap: Federated Learning Broadens Domain Generalization in Diagnostic AI Models
So far, the impact of training strategy, i.e., local versus collaborative, on the on-domain and off-domain diagnostic performance of AI models interpreting chest radiographs has not been assessed.
Split Without a Leak: Reducing Privacy Leakage in Split Learning
The idea behind it is that the client encrypts the activation map (the output of the split layer between the client and the server) before sending it to the server.
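The split-learning setup described above can be sketched with a toy two-party forward pass. This is a minimal illustration with assumed layer shapes, not the paper's implementation; in the paper's scheme the activation map would additionally be encrypted (e.g., homomorphically) before leaving the client:

```python
import numpy as np

rng = np.random.default_rng(0)

# Client-side weights up to the split layer (shapes are illustrative).
W_client = rng.normal(size=(4, 8))
# Server-side weights after the split layer.
W_server = rng.normal(size=(8, 2))

def client_forward(x):
    # Output of the split layer: the activation map. This is the only
    # thing that crosses the network; the raw input x stays on-device.
    return np.maximum(0.0, x @ W_client)  # ReLU activation

def server_forward(activation):
    # The server completes the forward pass from the activation map
    # alone, without ever seeing the client's raw input.
    return activation @ W_server

x = rng.normal(size=(1, 4))       # private client input
logits = server_forward(client_forward(x))
```

The privacy leakage the paper targets arises because the unencrypted activation map can still reveal information about `x`; encrypting it closes that channel.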
Private, fair and accurate: Training large-scale, privacy-preserving AI models in medical imaging
In this work, we evaluated the effect of privacy-preserving training of AI models on accuracy and fairness compared with non-private training.
Memorization of Named Entities in Fine-tuned BERT Models
One such risk is training data extraction from language models that have been trained on datasets containing personal and privacy-sensitive information.
Collaborative Training of Medical Artificial Intelligence Models with non-uniform Labels
Due to the rapid advancements in recent years, medical image analysis is largely dominated by deep learning (DL).
Privacy in Practice: Private COVID-19 Detection in X-Ray Images (Extended Version)
The introduced differential privacy (DP) should help limit the leakage threats posed by membership inference attacks (MIAs), and our practical analysis is the first to test this hypothesis on the COVID-19 classification task.
Bottlenecks CLUB: Unifying Information-Theoretic Trade-offs Among Complexity, Leakage, and Utility
In this work, we propose a general family of optimization problems, termed the complexity-leakage-utility bottleneck (CLUB) model, which (i) provides a unified theoretical framework that generalizes most of the state-of-the-art literature on information-theoretic privacy models, (ii) establishes a new interpretation of popular generative and discriminative models, (iii) offers new insights into generative compression models, and (iv) can be used for fair generative models.
Backpropagation Clipping for Deep Learning with Differential Privacy
We present backpropagation clipping, a novel variant of differentially private stochastic gradient descent (DP-SGD) for privacy-preserving deep learning.
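For context, standard DP-SGD bounds each example's influence by clipping its gradient to a fixed L2 norm and then adding Gaussian noise; backpropagation clipping is a variant of this scheme. The sketch below shows the baseline per-example clip-and-noise step for a simple linear regression model (illustrative only, not the paper's algorithm; `clip` and `noise_mult` are assumed hyperparameters):

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip=1.0, noise_mult=1.1, rng=None):
    """One DP-SGD step for linear regression with squared error:
    clip each per-example gradient to L2 norm `clip`, average,
    then add Gaussian noise scaled to the clipping bound."""
    rng = rng if rng is not None else np.random.default_rng(0)
    grads = []
    for xi, yi in zip(X, y):
        g = 2.0 * (w @ xi - yi) * xi            # per-example gradient
        norm = np.linalg.norm(g)
        g = g / max(1.0, norm / clip)           # clip to norm <= clip
        grads.append(g)
    g_bar = np.mean(grads, axis=0)
    # Noise std follows the usual sigma = noise_mult * clip, applied
    # to the averaged gradient (hence divided by the batch size).
    g_bar = g_bar + rng.normal(0.0, noise_mult * clip / len(X),
                               size=g_bar.shape)
    return w - lr * g_bar
```

Clipping bounds the sensitivity of the averaged gradient, which is what makes the added Gaussian noise yield a differential privacy guarantee; the paper's contribution is where in backpropagation the clipping is applied.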
Homogeneous Learning: Self-Attention Decentralized Deep Learning
To this end, we propose a decentralized learning model called Homogeneous Learning (HL) for tackling non-IID data with a self-attention mechanism.