Privacy Preserving Deep Learning

26 papers with code • 0 benchmarks • 3 datasets

The goal of privacy-preserving (deep) learning is to train a model while preserving privacy of the training dataset. Typically, it is understood that the trained model should be privacy-preserving (e.g., due to the training algorithm being differentially private).
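The most common route to a privacy-preserving trained model is differentially private SGD: clip each example's gradient contribution, then add calibrated Gaussian noise before the update. The sketch below illustrates that idea in plain Python; the function name and parameters are illustrative, not from any specific library.

```python
import random

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One differentially private gradient step (DP-SGD sketch).

    Each example's gradient is clipped to L2 norm `clip_norm`, the clipped
    gradients are summed, Gaussian noise scaled to the clipping bound is
    added, and the result is averaged over the batch.
    """
    rng = rng or random.Random(0)
    n = len(per_example_grads)
    dim = len(per_example_grads[0])
    summed = [0.0] * dim
    for g in per_example_grads:
        norm = sum(x * x for x in g) ** 0.5
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        for i, x in enumerate(g):
            summed[i] += x * scale  # clipped per-example contribution
    sigma = noise_multiplier * clip_norm
    return [(s + rng.gauss(0.0, sigma)) / n for s in summed]
```

Because each example's influence on the sum is bounded by `clip_norm`, the added noise yields a formal (ε, δ)-differential-privacy guarantee whose strength depends on `noise_multiplier` and the number of steps.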

Latest papers with no code

Converting Transformers to Polynomial Form for Secure Inference Over Homomorphic Encryption

no code yet • 15 Nov 2023

This innovation enables us to perform secure inference on language models, evaluated on WikiText-103.

The Paradox of Noise: An Empirical Study of Noise-Infusion Mechanisms to Improve Generalization, Stability, and Privacy in Federated Learning

no code yet • 9 Nov 2023

In a data-centric era, concerns regarding privacy and ethical data handling grow as machine learning relies more on personal information.

Generative Model-Based Attack on Learnable Image Encryption for Privacy-Preserving Deep Learning

no code yet • 9 Mar 2023

By taking advantage of leaked information from encrypted images, we propose a guided generative model as an attack on learnable image encryption to recover personally identifiable visual information.

Training Differentially Private Graph Neural Networks with Random Walk Sampling

no code yet • 2 Jan 2023

We propose to solve this issue by training graph neural networks on disjoint subgraphs of a given training graph.
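Training on disjoint subgraphs bounds how many subgraphs any single node can influence, which is what makes per-node privacy accounting tractable. A minimal illustrative partitioner is sketched below; the paper's actual random-walk sampling procedure may differ, and all names here are assumptions.

```python
import random

def disjoint_random_walk_subgraphs(adj, walk_len=3, rng=None):
    """Partition a graph into disjoint node sets via random walks (sketch).

    Each walk starts from a not-yet-visited node and only steps to
    unvisited neighbours, so every node ends up in exactly one subgraph.
    `adj` maps each node to a list of its neighbours.
    """
    rng = rng or random.Random(0)
    visited = set()
    parts = []
    nodes = list(adj)
    rng.shuffle(nodes)  # randomise walk starting points
    for start in nodes:
        if start in visited:
            continue
        walk = [start]
        visited.add(start)
        cur = start
        for _ in range(walk_len - 1):
            candidates = [v for v in adj[cur] if v not in visited]
            if not candidates:
                break
            cur = rng.choice(candidates)
            walk.append(cur)
            visited.add(cur)
        parts.append(walk)
    return parts
```

Because the subgraphs are disjoint, each node appears in at most one training shard, simplifying the sensitivity analysis for the DP mechanism.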

Privacy-preserving Deep Learning based Record Linkage

no code yet • 3 Nov 2022

The global model is then used by a linkage unit to classify unlabelled record pairs as matches or non-matches.

Review Learning: Alleviating Catastrophic Forgetting with Generative Replay without Generator

no code yet • 17 Oct 2022

When a deep learning model is sequentially trained on different datasets, it forgets the knowledge acquired from previous data, a phenomenon known as catastrophic forgetting.

Privacy-Preserving Deep Learning Model for Covid-19 Disease Detection

no code yet • 7 Sep 2022

The dataset from the Kaggle website is used to evaluate the designed model for COVID-19 detection.

Securing the Classification of COVID-19 in Chest X-ray Images: A Privacy-Preserving Deep Learning Approach

no code yet • 15 Mar 2022

In this paper, we propose a privacy-preserving deep learning (PPDL)-based approach to secure the classification of chest X-ray images.

Communication-Efficient Federated Distillation with Active Data Sampling

no code yet • 14 Mar 2022

Federated Distillation (FD) is a recently proposed alternative that enables communication-efficient and robust FL; it achieves an orders-of-magnitude reduction in communication overhead compared with FedAvg and is flexible enough to handle heterogeneous models at the clients.
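FD saves communication because clients exchange model outputs rather than model weights: each client uploads its logits on a shared public dataset, and the server aggregates them into soft targets for local distillation. The sketch below shows one such aggregation round; names and the exact aggregation rule (plain averaging) are illustrative assumptions.

```python
def federated_distillation_round(client_logits):
    """One server round of federated distillation (sketch).

    `client_logits[k][i][j]` is client k's logit for class j on public
    sample i. The server averages the logits into soft targets that each
    client then distills from locally. Upload size scales with
    samples x classes rather than with model size, so clients may use
    heterogeneous architectures.
    """
    n_clients = len(client_logits)
    n_samples = len(client_logits[0])
    n_classes = len(client_logits[0][0])
    avg = [[0.0] * n_classes for _ in range(n_samples)]
    for logits in client_logits:
        for i in range(n_samples):
            for j in range(n_classes):
                avg[i][j] += logits[i][j] / n_clients
    return avg
```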

DP-FP: Differentially Private Forward Propagation for Large Models

no code yet • 29 Dec 2021

Our DP-FP employs (1) novel representation clipping followed by noise addition in the forward-propagation stage, and (2) micro-batch construction via subsampling to achieve DP amplification and reduce the noise power to $1/M$, where $M$ is the number of micro-batches in a step.
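The core DP-FP mechanism can be sketched as follows: clip each example's hidden representation, split the batch into $M$ micro-batches, and add one independent Gaussian noise draw per micro-batch in the forward pass (averaging across micro-batches then reduces the effective noise power by $1/M$). This is a minimal sketch under stated assumptions; the names and exact noise placement are illustrative, not the paper's implementation.

```python
import random

def dp_fp_forward(representations, clip_c=1.0, sigma=1.0, n_micro=4, rng=None):
    """Forward-pass clipping and noising in the style of DP-FP (sketch).

    Each hidden representation is clipped to L2 norm `clip_c`; the batch is
    split into `n_micro` micro-batches, each receiving an independent
    Gaussian noise draw scaled to the clipping bound.
    """
    rng = rng or random.Random(0)
    clipped = []
    for r in representations:
        norm = sum(x * x for x in r) ** 0.5
        s = min(1.0, clip_c / norm) if norm > 0 else 1.0
        clipped.append([x * s for x in r])
    # round-robin split into micro-batches
    micro_batches = [clipped[i::n_micro] for i in range(n_micro)]
    noisy = []
    for mb in micro_batches:
        if not mb:
            continue
        noise = [rng.gauss(0.0, sigma * clip_c) for _ in mb[0]]
        noisy.extend([x + z for x, z in zip(r, noise)] for r in mb)
    return noisy
```

Unlike DP-SGD, no per-example gradients need to be materialised, which is what makes the approach attractive for large models.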