Privacy Preserving Deep Learning

26 papers with code • 0 benchmarks • 3 datasets

The goal of privacy-preserving (deep) learning is to train a model while preserving the privacy of the training dataset. Typically, this means the trained model itself should be privacy-preserving (e.g., because the training algorithm is differentially private).
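
For a concrete sense of what differentially private training involves, below is a minimal sketch of DP-SGD using the Opacus library. The library choice, toy model, data, and hyperparameters are illustrative assumptions, not taken from any particular paper on this page:

```python
# Minimal sketch: making an ordinary PyTorch training loop differentially
# private (DP-SGD) with Opacus. Model, data, and hyperparameters are toys.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
train_loader = DataLoader(
    TensorDataset(torch.randn(256, 20), torch.randint(0, 2, (256,))),
    batch_size=32,
)

privacy_engine = PrivacyEngine()
model, optimizer, train_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    noise_multiplier=1.1,  # scale of the Gaussian noise added to gradients
    max_grad_norm=1.0,     # per-sample gradient clipping bound
)

criterion = nn.CrossEntropyLoss()
for x, y in train_loader:
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()  # clipping and noising happen inside the wrapped optimizer
```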

Privacy-Preserving Deep Learning Using Deformable Operators for Secure Task Learning

factral/privdl 8 Apr 2024

To address these challenges, we propose a novel privacy-preserving framework that uses a set of deformable operators for secure task learning.

Mind the Gap: Federated Learning Broadens Domain Generalization in Diagnostic AI Models

tayebiarasteh/fldomain 1 Oct 2023

So far, the impact of training strategy, i.e., local versus collaborative, on the on-domain and off-domain diagnostic performance of AI models interpreting chest radiographs has not been assessed.
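
For context, the collaborative strategy contrasted with local training in settings like this is typically federated averaging (FedAvg). A minimal sketch of one aggregation round, assuming each site returns an ordinary PyTorch model (names and structure are illustrative):

```python
# Minimal FedAvg sketch: average the parameters of locally trained models
# into a single global model. `local_models` stands in for the models
# returned by participating sites after a round of local training.
import copy
import torch
from torch import nn

def fedavg(local_models: list[nn.Module]) -> nn.Module:
    """Return a global model whose weights are the element-wise mean."""
    global_model = copy.deepcopy(local_models[0])
    avg_state = global_model.state_dict()
    for key in avg_state:
        avg_state[key] = torch.stack(
            [m.state_dict()[key].float() for m in local_models]
        ).mean(dim=0)
    global_model.load_state_dict(avg_state)
    return global_model
```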

Split Without a Leak: Reducing Privacy Leakage in Split Learning

khoaguin/hesplitnet 30 Aug 2023

The idea behind it is that the client encrypts the activation map (the output of the split layer between the client and the server) before sending it to the server.
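 
As a rough illustration of that client-side step, the sketch below runs the layers before the split and encrypts the resulting activation map with CKKS via the TenSEAL library. The library choice, model split, and encryption parameters are assumptions for illustration, not the paper's exact protocol:

```python
# Minimal sketch of the split-learning client: run the layers kept on the
# client, then encrypt the activation map (CKKS) before it leaves the client.
# Assumes the TenSEAL library; the model split and sizes are illustrative.
import tenseal as ts
import torch
from torch import nn

client_part = nn.Sequential(nn.Linear(784, 128), nn.ReLU())  # client-side layers

context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2**40
context.generate_galois_keys()

x = torch.rand(1, 784)
activation = client_part(x).detach().flatten()       # output of the split layer
enc_activation = ts.ckks_vector(context, activation.tolist())
payload = enc_activation.serialize()                 # ciphertext sent to the server
```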

Private, fair and accurate: Training large-scale, privacy-preserving AI models in medical imaging

TUM-AIMED/2.5DAttention 3 Feb 2023

In this work, we evaluated the effect of privacy-preserving training of AI models on accuracy and fairness, compared with non-private training.

Memorization of Named Entities in Fine-tuned BERT Models

drndr/bert_ent_attack 7 Dec 2022

One such risk is training data extraction from language models that have been trained on datasets containing personal and privacy-sensitive information.
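
A simple way to probe for this kind of memorization in a masked language model is to mask part of a named entity that appeared in the fine-tuning data and inspect the model's top completions. A hedged sketch using the Hugging Face transformers pipeline; the model name and prompt are illustrative placeholders:

```python
# Minimal memorization probe for a masked language model: mask part of a
# named entity and inspect the top predictions. If the model reliably
# restores an entity that only appeared in its fine-tuning data, that is
# evidence of memorization. Model name and prompt are placeholders.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-cased")

for candidate in fill_mask("The patient, [MASK] Smith, was admitted on Monday."):
    print(f"{candidate['token_str']!r}  score={candidate['score']:.3f}")
```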

Collaborative Training of Medical Artificial Intelligence Models with non-uniform Labels

tayebiarasteh/chestx 24 Nov 2022

Owing to rapid advances in recent years, medical image analysis is largely dominated by deep learning (DL).

Privacy in Practice: Private COVID-19 Detection in X-Ray Images (Extended Version)

luckyos-code/mia-covid 21 Nov 2022

The introduced differential privacy (DP) should help limit the leakage threats posed by membership inference attacks (MIAs), and our practical analysis is the first to test this hypothesis on the COVID-19 classification task.
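
For reference, the simplest MIA baseline thresholds the model's prediction confidence, since training members tend to receive more confident predictions. A minimal sketch; the threshold and model are placeholders, and real attacks typically calibrate against shadow models:

```python
# Minimal confidence-threshold membership inference attack (MIA) sketch.
# Guess "member" when the model's top softmax probability is high; the
# threshold would be tuned on shadow data in a real attack.
import torch
from torch import nn

@torch.no_grad()
def infer_membership(model: nn.Module, x: torch.Tensor,
                     threshold: float = 0.9) -> torch.Tensor:
    """Return True where a sample is guessed to be a training member."""
    probs = torch.softmax(model(x), dim=-1)
    confidence = probs.max(dim=-1).values
    return confidence > threshold
```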

Bottlenecks CLUB: Unifying Information-Theoretic Trade-offs Among Complexity, Leakage, and Utility

BehroozRazeghi/CLUB 11 Jul 2022

In this work, we propose a general family of optimization problems, termed the complexity-leakage-utility bottleneck (CLUB) model, which (i) provides a unified theoretical framework that generalizes most of the state-of-the-art information-theoretic privacy models, (ii) establishes a new interpretation of popular generative and discriminative models, (iii) offers new insights into generative compression models, and (iv) can be used in fair generative models.
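
For orientation, the classical information bottleneck that such frameworks generalize trades representation complexity against utility; adding a leakage penalty for a sensitive attribute S conveys the flavor of a complexity-leakage-utility trade-off. The second objective below is illustrative only; the paper's exact CLUB formulation may differ:

```latex
% Classical information bottleneck: compress X into a representation Z
% while keeping information about the task variable Y (beta trades off).
\min_{p(z \mid x)} \; I(X;Z) - \beta \, I(Z;Y)

% Illustrative leakage-penalized variant: additionally penalize what Z
% reveals about a sensitive attribute S (lambda-weighted).
\min_{p(z \mid x)} \;
  \underbrace{I(X;Z)}_{\text{complexity}}
  - \beta \, \underbrace{I(Z;Y)}_{\text{utility}}
  + \lambda \, \underbrace{I(Z;S)}_{\text{leakage}}
```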

Backpropagation Clipping for Deep Learning with Differential Privacy

uvm-plaid/backpropagation-clipping 10 Feb 2022

We present backpropagation clipping, a novel variant of differentially private stochastic gradient descent (DP-SGD) for privacy-preserving deep learning.
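
For contrast with the paper's variant, the core step of standard DP-SGD is: compute each per-sample gradient, clip it to a norm bound C, sum, add Gaussian noise, and average. A naive sketch of that baseline (not the paper's backpropagation-clipping method; the loop over samples is for clarity, not efficiency):

```python
# Core of standard DP-SGD (the baseline this paper varies): clip each
# per-sample gradient to norm C, sum, add Gaussian noise, then average.
import torch
from torch import nn

def dp_sgd_step(model: nn.Module, loss_fn, xs, ys,
                lr=0.05, C=1.0, noise_mult=1.1):
    params = [p for p in model.parameters() if p.requires_grad]
    grad_sum = [torch.zeros_like(p) for p in params]
    for x, y in zip(xs, ys):  # per-sample gradients (naive loop for clarity)
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        norm = torch.sqrt(sum(p.grad.norm() ** 2 for p in params))
        scale = torch.clamp(C / (norm + 1e-12), max=1.0)  # clip to bound C
        for g, p in zip(grad_sum, params):
            g += p.grad * scale
    n = len(xs)
    with torch.no_grad():
        for g, p in zip(grad_sum, params):
            noisy = g + torch.randn_like(g) * noise_mult * C  # Gaussian noise
            p -= lr * noisy / n                               # averaged update
```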

Homogeneous Learning: Self-Attention Decentralized Deep Learning

yuweisunn/homogeneous-learning 11 Oct 2021

To this end, we propose a decentralized learning model called Homogeneous Learning (HL) for tackling non-IID data with a self-attention mechanism.
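
Setting aside HL's self-attention-based peer selection (described in the paper and repo), the basic communication primitive in decentralized learning of this kind is peer-to-peer model averaging after local training. A generic sketch of that primitive only, not HL's actual mechanism:

```python
# Generic decentralized (gossip-style) averaging step between two peers.
# This shows only the baseline communication pattern; HL's self-attention
# mechanism for choosing which peer to learn from is in the paper/repo.
import torch
from torch import nn

@torch.no_grad()
def average_with_peer(local: nn.Module, peer: nn.Module) -> None:
    """Overwrite `local` weights with the mean of the two models' weights."""
    for p_local, p_peer in zip(local.parameters(), peer.parameters()):
        p_local.mul_(0.5).add_(0.5 * p_peer)
```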
