Privacy Preserving Deep Learning

25 papers with code • 0 benchmarks • 3 datasets

The goal of privacy-preserving (deep) learning is to train a model while preserving the privacy of the training dataset. Typically, this means the trained model itself should not leak information about individual training examples (e.g., because the training algorithm satisfies differential privacy).
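For concreteness, here is a minimal sketch of the mechanism most commonly used to achieve this, DP-SGD: clip each example's gradient to a fixed norm, then add Gaussian noise before the update. The function name and hyperparameters (`clip_norm`, `noise_multiplier`) are illustrative, and the per-example loop stands in for the vectorized per-sample gradients that production libraries such as Opacus implement together with privacy accounting.

```python
import torch

def dp_sgd_step(model, loss_fn, xb, yb, lr=0.1,
                clip_norm=1.0, noise_multiplier=1.1):
    """One DP-SGD step: clip each example's gradient, then add Gaussian noise."""
    summed = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in zip(xb, yb):                      # per-example gradients (microbatch of 1)
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        grads = [p.grad.detach().clone() for p in model.parameters()]
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (norm + 1e-12), max=1.0)  # L2 norm <= clip_norm
        for s, g in zip(summed, grads):
            s.add_(g * scale)
    with torch.no_grad():
        for p, s in zip(model.parameters(), summed):
            noise = torch.randn_like(s) * noise_multiplier * clip_norm
            p.add_(-(lr / len(xb)) * (s + noise))  # noisy average gradient update
```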

Most implemented papers

Privacy-Preserving Deep Action Recognition: An Adversarial Learning Framework and A New Dataset

VITA-Group/PA-HMDB51 12 Jun 2019

We first discuss an innovative cross-dataset training and evaluation heuristic that enables the use of multiple single-task datasets (one with target-task labels and the other with privacy labels) in our problem.
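Setting the paper's exact framework aside, the usual building block for this kind of adversarial privacy training is a gradient-reversal layer: the privacy branch learns to predict privacy labels, while the reversed gradient pushes the shared encoder to erase privacy-revealing features. A sketch under those assumptions (module and variable names are illustrative):

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the backward pass."""
    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambda_ * grad_output, None

def grad_reverse(x, lambda_=1.0):
    return GradReverse.apply(x, lambda_)

# Illustrative alternating use with two single-task datasets:
#   task batch (target labels):     loss_task = ce(task_head(encoder(x)), y_task)
#   privacy batch (privacy labels): loss_priv = ce(priv_head(grad_reverse(encoder(x))), y_priv)
# Minimizing loss_priv trains the privacy classifier, while the reversed
# gradient trains the encoder to defeat it.
```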

A generic framework for privacy preserving deep learning

OpenMined/PySyft 9 Nov 2018

We detail a new framework for privacy-preserving deep learning and discuss its assets.
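As an illustration of the framework's core abstraction (pointer tensors on remote workers), here is the PySyft 0.2-era workflow from the tutorials of that period; the API has changed substantially in later versions, so treat this as a sketch of the idea rather than the current interface.

```python
import torch
import syft as sy  # PySyft ~0.2 API; later versions differ

hook = sy.TorchHook(torch)                    # patch torch with PySyft tensor types
bob = sy.VirtualWorker(hook, id="bob")        # simulated remote data holder

x = torch.tensor([1.0, 2.0, 3.0]).send(bob)   # data now lives on bob; we hold a pointer
w = torch.tensor([0.5]).send(bob)

y = x * w                                     # computation is executed on bob's side
print(y.get())                                # result is only revealed on explicit request
```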

Locally Differentially Private (Contextual) Bandits Learning

huang-research-group/LDPbandit2020 NeurIPS 2020

In this paper, we study locally differentially private (LDP) bandit learning.
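The paper's estimators aside, the LDP primitive underneath is simple: each user perturbs their own reward before it ever leaves the device, e.g., with the Laplace mechanism. A minimal sketch (bounds and epsilon are illustrative):

```python
import numpy as np

def ldp_laplace(reward, epsilon=1.0, lo=0.0, hi=1.0):
    """Locally privatize a bounded reward with the Laplace mechanism.

    Sensitivity is (hi - lo), so noise scale (hi - lo) / epsilon gives
    epsilon-LDP: the server only ever sees the noisy value.
    """
    reward = np.clip(reward, lo, hi)
    return reward + np.random.laplace(scale=(hi - lo) / epsilon)

# The server aggregates many noisy rewards per arm; Laplace noise is zero-mean,
# so empirical means of privatized rewards remain unbiased estimates.
```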

Sisyphus: A Cautionary Tale of Using Low-Degree Polynomial Activations in Privacy-Preserving Deep Learning

kvgarimella/sisyphus-ppml 26 Jul 2021

In this work, we ask: Is it feasible to substitute all ReLUs with low-degree polynomial activation functions for building deep, privacy-friendly neural networks?
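For context, a low-degree polynomial activation is just a drop-in module like the sketch below; polynomials matter because homomorphic encryption and MPC handle additions and multiplications cheaply but not the exact comparison inside ReLU. The coefficients here are an illustrative quadratic approximation of ReLU, not the paper's.

```python
import torch.nn as nn

class PolyAct(nn.Module):
    """Degree-2 polynomial activation: a*x^2 + b*x + c.

    Squaring is HE/MPC-friendly, but stacking many such layers is exactly
    the regime the paper shows destabilizes training in deep networks.
    """
    def __init__(self, a=0.25, b=0.5, c=0.0):
        super().__init__()
        self.a, self.b, self.c = a, b, c

    def forward(self, x):
        return self.a * x * x + self.b * x + self.c

# Drop-in replacement for ReLU:
# model = nn.Sequential(nn.Linear(784, 128), PolyAct(), nn.Linear(128, 10))
```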

Private, fair and accurate: Training large-scale, privacy-preserving AI models in medical imaging

TUM-AIMED/2.5DAttention 3 Feb 2023

In this work, we evaluated the effect of privacy-preserving training on the accuracy and fairness of AI models, compared with non-private training.

Towards Fair and Privacy-Preserving Federated Deep Models

lingjuanlv/FPPDL 4 Jun 2019

This problem can be addressed by either a centralized framework that deploys a central server to train a global model on the joint data from all parties, or a distributed framework that leverages a parameter server to aggregate local model updates.
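In the distributed setting, the parameter-server aggregation step is typically federated averaging. A minimal sketch (plain FedAvg, not the paper's fairness-aware variant), weighting each party by its local dataset size:

```python
import torch

def fedavg(local_state_dicts, num_examples):
    """Parameter-server aggregation: weighted average of local model parameters."""
    total = sum(num_examples)
    global_state = {}
    for key in local_state_dicts[0]:
        global_state[key] = sum(
            sd[key] * (n / total)
            for sd, n in zip(local_state_dicts, num_examples)
        )
    return global_state

# Each round: the server broadcasts global_state, parties train locally on
# their private data, and the server calls fedavg() on the returned state_dicts.
```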

Fawkes: Protecting Privacy against Unauthorized Deep Learning Models

Shawn-Shan/fawkes 19 Feb 2020

In this paper, we propose Fawkes, a system that helps individuals inoculate their images against unauthorized facial recognition models.

AriaNN: Low-Interaction Privacy-Preserving Deep Learning via Function Secret Sharing

LaRiffle/AriaNN 8 Jun 2020

We evaluate our end-to-end system for private inference between distant servers on standard neural networks such as AlexNet, VGG16 or ResNet18, and for private training on smaller networks like LeNet.
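Function secret sharing itself is involved; the sketch below shows only the simpler additive secret sharing that such two-server protocols layer their computation on: a private value is split so that neither server learns anything alone, yet additions work share-wise with no communication. The ring size is illustrative.

```python
import secrets

Q = 2**62  # illustrative ring size

def share(x):
    """Split integer x into two additive shares: neither share reveals x alone."""
    r = secrets.randbelow(Q)
    return r, (x - r) % Q

def reconstruct(a, b):
    return (a + b) % Q

# Additions happen share-wise, locally on each server:
x0, x1 = share(5)
y0, y1 = share(7)
assert reconstruct((x0 + y0) % Q, (x1 + y1) % Q) == 12
```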

Locally Private Graph Neural Networks

sisaman/lpgnn 9 Jun 2020

In this paper, we study the problem of node data privacy, where graph nodes hold potentially sensitive data that must be kept private, yet that data could be useful to a central server for training a GNN over the graph.
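The paper's multi-bit mechanism is more elaborate, but the underlying idea of node-level LDP can be illustrated with randomized response on binary node features: each node flips its own bits with calibrated probability before upload, so the server never sees raw attributes. A sketch (epsilon is illustrative):

```python
import numpy as np

def randomized_response(bits, epsilon=1.0):
    """Keep each binary feature w.p. e^eps / (1 + e^eps), else flip it.

    The likelihood ratio between any two inputs is at most e^eps,
    giving epsilon-LDP per bit; the server trains on the noisy features.
    """
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    keep = np.random.rand(*bits.shape) < p_keep
    return np.where(keep, bits, 1 - bits)

# Usage: noisy_features = randomized_response(node_features)  # shape (n_nodes, d)
```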

Tempered Sigmoid Activations for Deep Learning with Differential Privacy

woodyx218/opacus_global_clipping 28 Jul 2020

Because learning sometimes involves sensitive data, machine learning algorithms have been extended to offer privacy for training data.
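The paper's tempered sigmoids form the bounded family ψ(x) = s · σ(T·x) − o, with scale s, inverse temperature T, and offset o; tanh is the special case s = 2, T = 2, o = 1. A minimal PyTorch sketch:

```python
import torch
import torch.nn as nn

class TemperedSigmoid(nn.Module):
    """psi(x) = s * sigmoid(T * x) - o: a bounded activation family.

    Bounded activations keep gradient magnitudes small, so less signal is
    destroyed by DP-SGD's clipping; tanh is recovered with s=2, T=2, o=1.
    """
    def __init__(self, s=2.0, T=2.0, o=1.0):
        super().__init__()
        self.s, self.T, self.o = s, T, o

    def forward(self, x):
        return self.s * torch.sigmoid(self.T * x) - self.o

# Sanity check: the default parameters reproduce tanh exactly.
x = torch.linspace(-3, 3, 7)
assert torch.allclose(TemperedSigmoid()(x), torch.tanh(x), atol=1e-6)
```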