TAN Without a Burn: Scaling Laws of DP-SGD

7 Oct 2022 · Tom Sander, Pierre Stock, Alexandre Sablayrolles

Differentially Private methods for training Deep Neural Networks (DNNs) have progressed recently, in particular with the use of massive batches and aggregated data augmentations for a large number of training steps. These techniques require much more computing resources than their non-private counterparts, shifting the traditional privacy-accuracy trade-off to a privacy-accuracy-compute trade-off and making hyper-parameter search virtually impossible for realistic scenarios. In this work, we decouple the privacy analysis from the experimental behavior of noisy training to explore the trade-off with minimal computational requirements. We first use the tools of Rényi Differential Privacy (RDP) to highlight that the privacy budget, when not overcharged, only depends on the total amount of noise (TAN) injected throughout training. We then derive scaling laws for training models with DP-SGD to optimize hyper-parameters with more than a $100\times$ reduction in computational budget. We apply the proposed method to CIFAR-10 and ImageNet and, in particular, strongly improve the state of the art on ImageNet, with a +9 point gain in top-1 accuracy for a privacy budget of $\epsilon=8$.
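The TAN observation can be checked directly with an off-the-shelf RDP accountant. The sketch below, assuming the Opacus RDPAccountant API and illustrative ImageNet-like hyper-parameters (the step count, noise multiplier, batch size, and delta are chosen for the example, not taken from the paper), computes epsilon for a reference configuration and for configurations in which the noise multiplier sigma and the sampling rate q = B/N are scaled down by the same factor at a fixed number of steps. As long as sigma is not pushed into the "overcharged" regime, epsilon stays roughly constant.

```python
# Sketch: privacy budget under joint scaling of noise and batch size (TAN).
# Assumes the Opacus RDP accountant (pip install opacus); the hyper-parameters
# below are illustrative, not the paper's exact training configuration.
from opacus.accountants import RDPAccountant

N = 1_281_167        # ImageNet training set size
STEPS = 10_000       # fixed number of DP-SGD steps (illustrative)
DELTA = 1e-7         # target delta (illustrative)


def epsilon(sigma: float, batch_size: int) -> float:
    """Epsilon after STEPS iterations of DP-SGD with Poisson sampling."""
    accountant = RDPAccountant()
    q = batch_size / N  # per-step sampling rate
    for _ in range(STEPS):
        accountant.step(noise_multiplier=sigma, sample_rate=q)
    return accountant.get_epsilon(delta=DELTA)


# Reference configuration and two scaled-down configurations: sigma and the
# batch size (hence q) are divided by the same factor k, so the total amount
# of noise sigma / (q * sqrt(STEPS)) is unchanged.
for k in (1, 2, 4):
    sigma, batch = 2.5 / k, 16_384 // k
    print(f"k={k}: sigma={sigma:.3f}, B={batch}, eps={epsilon(sigma, batch):.2f}")
# Expected: the three epsilon values are close; pushing k much further makes
# sigma too small ("overcharged" regime) and epsilon increases sharply.
```

This invariance is what underlies the paper's scaling-law approach: hyper-parameters can be searched with small batches and proportionally reduced noise, and the full-compute private run is only paid for once at the end.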

Benchmark result: Image Classification with Differential Privacy · Dataset: ImageNet · Model: NFResnet-50 · Top-1 Accuracy: 39.2 · Global Rank: #1
