Image Classification with Differential Privacy
5 papers with code • 1 benchmark • 1 dataset
Image Classification with Differential Privacy is a variant of the image classification task in which the final classification output describes only the patterns of groups within the dataset while withholding information about any individual in the dataset.
Most implemented papers
Toward Training at ImageNet Scale with Differential Privacy
Despite a rich literature on how to train ML models with differential privacy, it remains extremely challenging to train real-life, large neural networks with both reasonable accuracy and privacy.
Unlocking High-Accuracy Differentially Private Image Classification through Scale
Differential Privacy (DP) provides a formal privacy guarantee preventing adversaries with access to a machine learning model from extracting information about individual training points.
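The formal guarantee referred to here is standard (ε, δ)-differential privacy; a sketch of its usual statement, for a randomized training mechanism M and any two datasets D, D' differing in a single training point:

```latex
\Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S] + \delta
\quad \text{for all measurable sets } S .
```

Smaller ε and δ mean the trained model's distribution changes less when any one individual's data is added or removed, which is what limits an adversary's ability to extract information about individual training points.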
SmoothNets: Optimizing CNN architecture design for differentially private deep learning
The arguably most widely employed algorithm to train deep neural networks with Differential Privacy is DPSGD, which requires clipping and noising of per-sample gradients.
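The two operations named above, per-sample gradient clipping and noise addition, are the core of DP-SGD. A minimal illustrative sketch in NumPy for linear regression follows; the function name, hyperparameter values, and the squared-error loss are illustrative choices, not taken from any of the listed papers, and production code would instead use a library such as Opacus or TensorFlow Privacy together with a privacy accountant:

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_mult=1.1, rng=None):
    """One DP-SGD step for linear regression (illustrative sketch only).

    Each sample's gradient is clipped to norm `clip_norm` to bound its
    influence, the clipped gradients are summed, and Gaussian noise with
    standard deviation `noise_mult * clip_norm` is added before averaging.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n = len(X)
    summed = np.zeros_like(w)
    for xi, yi in zip(X, y):
        # per-sample gradient of the squared error: 2 * (w.x - y) * x
        g = 2.0 * (xi @ w - yi) * xi
        # clip this sample's gradient so no single point dominates
        norm = np.linalg.norm(g)
        if norm > clip_norm:
            g = g * (clip_norm / norm)
        summed += g
    # add Gaussian noise calibrated to the clipping bound
    noise = rng.normal(0.0, noise_mult * clip_norm, size=w.shape)
    return w - lr * (summed + noise) / n
```

The clipping bound is what makes the sensitivity of the gradient sum known, so the Gaussian noise can be calibrated to it; tracking the resulting (ε, δ) over many steps is the job of a privacy accountant, which this sketch omits.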
TAN Without a Burn: Scaling Laws of DP-SGD
Differentially Private methods for training Deep Neural Networks (DNNs) have progressed recently, in particular with the use of massive batches and aggregated data augmentations for a large number of training steps.
Private, fair and accurate: Training large-scale, privacy-preserving AI models in medical imaging
In this work, we evaluated the effect of privacy-preserving training of AI models for chest radiograph diagnosis regarding accuracy and fairness compared to non-private training.