Privacy Preserving Deep Learning

26 papers with code • 0 benchmarks • 3 datasets

The goal of privacy-preserving (deep) learning is to train a model while preserving the privacy of the training dataset. Typically, this means the trained model itself should be privacy-preserving (e.g., because the training algorithm is differentially private).

Most implemented papers

Secure Data Sharing With Flow Model

duchenzhuang/flowencrypt 24 Sep 2020

In the classical multi-party computation setting, multiple parties jointly compute a function without revealing their own input data.
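As a minimal illustration of this multi-party setting, the sketch below shows additive secret sharing, a standard building block in which each party holds a random-looking share and linear functions can be computed on shares directly. The modulus, share count, and addition example here are illustrative, not the paper's actual protocol.

```python
import random

PRIME = 2**61 - 1  # field modulus; an illustrative choice, not from the paper

def share(secret, n=3):
    """Split `secret` into n additive shares mod PRIME.

    Any n-1 shares are uniformly random and reveal nothing;
    all n shares sum back to the secret.
    """
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recombine shares by summing them mod PRIME."""
    return sum(shares) % PRIME

# Each party holds one share of x and one of y. By adding its local
# shares, each party obtains a share of x + y, so the sum is computed
# jointly without any party revealing its input.
x_shares = share(42)
y_shares = share(100)
z_shares = [(a + b) % PRIME for a, b in zip(x_shares, y_shares)]
assert reconstruct(z_shares) == 142
```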

Can we Generalize and Distribute Private Representation Learning?

shams-sam/privacygans 5 Oct 2020

We study the problem of learning representations that are private yet informative, i.e., provide information about intended "ally" targets while hiding sensitive "adversary" attributes.

CryptGPU: Fast Privacy-Preserving Machine Learning on the GPU

jeffreysijuntan/cryptgpu 22 Apr 2021

We then identify a sequence of "GPU-friendly" cryptographic protocols to enable privacy-preserving evaluation of both linear and non-linear operations on the GPU.

Variational Leakage: The Role of Information Complexity in Privacy Leakage

BehroozRazeghi/Variational-Leakage 5 Jun 2021

We study the role of information complexity in privacy leakage about an attribute of an adversary's interest, which is not known a priori to the system designer.

Antipodes of Label Differential Privacy: PATE and ALIBI

facebookresearch/label_dp_antipodes NeurIPS 2021

We propose two novel approaches based on, respectively, the Laplace mechanism and the PATE framework, and demonstrate their effectiveness on standard benchmarks.
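To make the Laplace-mechanism side concrete, here is a minimal sketch of a PATE-style noisy-max release: each class's teacher vote count is perturbed with Laplace noise of scale 1/ε before the winning label is released. Function names and the sampling details are illustrative, not the paper's implementation.

```python
import math
import random

def laplace(scale):
    """Sample Laplace(0, scale) via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def noisy_argmax(vote_counts, epsilon):
    """PATE-style noisy max: perturb each class's teacher vote count
    with Laplace(1/epsilon) noise, then release only the argmax.

    Releasing the argmax of noisy counts, rather than the counts
    themselves, is what limits leakage about any single teacher's label.
    """
    noisy = [c + laplace(1.0 / epsilon) for c in vote_counts]
    return max(range(len(noisy)), key=noisy.__getitem__)
```

With a large vote margin the noisy winner almost always matches the true plurality label, while small margins (where a single teacher could flip the outcome) are where the noise actually randomizes the answer.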

Towards Secure and Practical Machine Learning via Secret Sharing and Random Permutation

zfscgy/Amber 17 Aug 2021

Since our method reduces the cost of element-wise function computation, it is more efficient than existing cryptographic methods.

Homogeneous Learning: Self-Attention Decentralized Deep Learning

yuweisunn/homogeneous-learning 11 Oct 2021

To this end, we propose a decentralized learning model called Homogeneous Learning (HL) for tackling non-IID data with a self-attention mechanism.

Backpropagation Clipping for Deep Learning with Differential Privacy

uvm-plaid/backpropagation-clipping 10 Feb 2022

We present backpropagation clipping, a novel variant of differentially private stochastic gradient descent (DP-SGD) for privacy-preserving deep learning.
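The DP-SGD family that this work builds on combines two steps: clip each per-example gradient to a fixed L2 bound, then add Gaussian noise calibrated to that bound. A minimal sketch of those two steps follows; the function names and plain-list gradients are illustrative, and this shows the generic DP-SGD step rather than the paper's backpropagation-clipping variant.

```python
import math
import random

def clip(grad, bound):
    """Rescale one example's gradient so its L2 norm is at most `bound`."""
    norm = math.sqrt(sum(g * g for g in grad))
    factor = min(1.0, bound / norm) if norm > 0 else 1.0
    return [g * factor for g in grad]

def private_grad(per_example_grads, bound, noise_multiplier):
    """One DP-SGD gradient: clip each example's gradient, sum them,
    then add Gaussian noise with std. dev. noise_multiplier * bound.

    Clipping bounds each example's influence (the sensitivity), which
    is what lets the Gaussian noise give a differential-privacy guarantee.
    """
    clipped = [clip(g, bound) for g in per_example_grads]
    dim = len(clipped[0])
    summed = [sum(g[i] for g in clipped) for i in range(dim)]
    sigma = noise_multiplier * bound
    return [s + random.gauss(0.0, sigma) for s in summed]
```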

Bottlenecks CLUB: Unifying Information-Theoretic Trade-offs Among Complexity, Leakage, and Utility

BehroozRazeghi/CLUB 11 Jul 2022

In this work, we propose a general family of optimization problems, termed the complexity-leakage-utility bottleneck (CLUB) model, which (i) provides a unified theoretical framework that generalizes most of the state-of-the-art literature on information-theoretic privacy models, (ii) establishes a new interpretation of popular generative and discriminative models, (iii) offers new insights into generative compression models, and (iv) can be applied to fair generative models.