Image Classification with Differential Privacy

7 papers with code • 1 benchmark • 1 dataset

Image Classification with Differential Privacy is a privacy-preserving variant of the image classification task in which the final classification output describes only the patterns of groups within the dataset while withholding information about any individual in the dataset.

Most implemented papers

Unlocking High-Accuracy Differentially Private Image Classification through Scale

deepmind/jax_privacy 28 Apr 2022

Differential Privacy (DP) provides a formal privacy guarantee preventing adversaries with access to a machine learning model from extracting information about individual training points.
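For context, the formal guarantee this refers to is standard (ε, δ)-differential privacy: a randomized mechanism M satisfies it if, for every pair of datasets D and D′ differing in a single record and every set of outcomes S,

```latex
\Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S] + \delta
```

Smaller ε and δ mean the trained model's behavior can depend only weakly on any single training point, which is what limits what an adversary can extract about individuals.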

Private, fair and accurate: Training large-scale, privacy-preserving AI models in medical imaging

TUM-AIMED/2.5DAttention 3 Feb 2023

In this work, we evaluated the effect of privacy-preserving training of AI models regarding accuracy and fairness compared to non-private training.

Toward Training at ImageNet Scale with Differential Privacy

google-research/dp-imagenet 28 Jan 2022

Despite a rich literature on how to train ML models with differential privacy, it remains extremely challenging to train real-life, large neural networks with both reasonable accuracy and privacy.

SmoothNets: Optimizing CNN architecture design for differentially private deep learning

NiWaRe/DPBenchmark 9 May 2022

The arguably most widely employed algorithm to train deep neural networks with Differential Privacy is DPSGD, which requires clipping and noising of per-sample gradients.
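As a rough illustration (not code from the repository above), one DP-SGD update with per-sample gradient clipping and Gaussian noising can be sketched in plain NumPy; the function name and parameters here are illustrative only:

```python
import numpy as np

def dp_sgd_step(per_sample_grads, params, clip_norm, noise_multiplier, lr, rng):
    """One illustrative DP-SGD step: clip each per-sample gradient to an L2
    norm of at most clip_norm, average, add calibrated Gaussian noise, and
    take a gradient descent step. Hyperparameter names are hypothetical."""
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # Scale the gradient down so its L2 norm is at most clip_norm.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    # Noise standard deviation is proportional to the clipping bound,
    # so the noise masks any single example's (bounded) contribution.
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=params.shape)
    noisy_grad = noisy_sum / len(clipped)
    return params - lr * noisy_grad
```

With `noise_multiplier = 0` this reduces to ordinary SGD with gradient clipping; the privacy guarantee comes from the combination of the clipping bound and the added noise, accounted for over all training steps.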

TAN Without a Burn: Scaling Laws of DP-SGD

facebookresearch/tan 7 Oct 2022

Differentially Private methods for training Deep Neural Networks (DNNs) have progressed recently, in particular with the use of massive batches and aggregated data augmentations for a large number of training steps.

Equivariant Differentially Private Deep Learning: Why DP-SGD Needs Sparser Models

hlzl/equivariant 30 Jan 2023

We achieve such sparsity by design by introducing equivariant convolutional networks for model training with Differential Privacy.

Preserving privacy in domain transfer of medical AI models comes at no performance costs: The integral role of differential privacy

tayebiarasteh/privacydomain 10 Jun 2023

We specifically investigate the performance of models trained with DP compared to models trained without DP on data from institutions that the model had not seen during training (i.e., external validation), the situation reflective of the clinical use of AI models.