Image Classification with Differential Privacy

5 papers with code • 1 benchmark • 1 dataset

Image Classification with Differential Privacy is a variant of the standard image classification task in which the trained model reveals only aggregate patterns of groups within the dataset while withholding information about any individual in the dataset.


Most implemented papers

Toward Training at ImageNet Scale with Differential Privacy

google-research/dp-imagenet 28 Jan 2022

Despite a rich literature on how to train ML models with differential privacy, it remains extremely challenging to train real-life, large neural networks with both reasonable accuracy and privacy.

Unlocking High-Accuracy Differentially Private Image Classification through Scale

deepmind/jax_privacy 28 Apr 2022

Differential Privacy (DP) provides a formal privacy guarantee preventing adversaries with access to a machine learning model from extracting information about individual training points.
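The guarantee referenced here is the standard (ε, δ)-differential privacy definition: a randomized training mechanism M satisfies it if, for all pairs of datasets D and D′ differing in a single record and all sets of outcomes S,

```latex
\Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S] + \delta
```

Smaller ε and δ mean the model's output distribution is nearly unchanged by any one training point, which is what limits an adversary's ability to extract information about individuals.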

SmoothNets: Optimizing CNN architecture design for differentially private deep learning

NiWaRe/DPBenchmark 9 May 2022

The arguably most widely employed algorithm to train deep neural networks with Differential Privacy is DP-SGD, which requires clipping and noising of per-sample gradients.
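The clip-and-noise step at the heart of DP-SGD can be sketched in a few lines of NumPy. This is an illustrative simplification, not the implementation used by any of the listed repos; the function name and interface are hypothetical:

```python
import numpy as np

def dp_sgd_step(per_sample_grads, clip_norm, noise_multiplier, rng):
    """One DP-SGD aggregation step (illustrative sketch).

    per_sample_grads: array of shape (batch, dim), one gradient per example.
    clip_norm: C, the per-sample L2 clipping threshold.
    noise_multiplier: sigma; Gaussian noise has std sigma * C.
    """
    # Rescale each per-sample gradient so its L2 norm is at most C.
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_sample_grads * scale

    # Sum the clipped gradients, add calibrated Gaussian noise,
    # then average over the batch to get the update direction.
    noise = rng.normal(0.0, noise_multiplier * clip_norm,
                       size=per_sample_grads.shape[1])
    return (clipped.sum(axis=0) + noise) / per_sample_grads.shape[0]
```

Clipping bounds each example's influence on the update (its sensitivity), and the Gaussian noise scaled to that bound is what yields the (ε, δ) guarantee; the per-sample gradients themselves are what make DP-SGD memory-hungry compared to ordinary SGD.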

TAN Without a Burn: Scaling Laws of DP-SGD

facebookresearch/tan 7 Oct 2022

Differentially Private methods for training Deep Neural Networks (DNNs) have progressed recently, in particular with the use of massive batches and aggregated data augmentations for a large number of training steps.

Private, fair and accurate: Training large-scale, privacy-preserving AI models in medical imaging

tayebiarasteh/dp_cxr 3 Feb 2023

In this work, we evaluated the effect of privacy-preserving training of AI models for chest radiograph diagnosis regarding accuracy and fairness compared to non-private training.