Implicit Generative Modeling of Random Noise during Training for Adversarial Robustness

We introduce a Noise-based prior Learning (NoL) approach for training neural networks that are intrinsically robust to adversarial attacks. We find that implicitly modeling random noise generatively, using the same loss function employed during posterior maximization, improves a model's understanding of the data manifold and thereby furthers adversarial robustness.
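The core idea, joint optimization of the network weights and a learnable noise tensor under one shared classification loss, can be sketched as follows. This is an illustrative toy example under assumed settings (a linear softmax classifier, a single shared noise vector, synthetic data), not the authors' implementation:

```python
import numpy as np

# Sketch of the NoL idea (assumptions: linear classifier, one shared noise
# vector, synthetic data): a learnable noise tensor is added to every input,
# and the SAME loss that trains the weights also updates the noise by
# gradient descent, implicitly modeling the noise during training.

rng = np.random.default_rng(0)

D, C, N = 20, 3, 64                     # input dim, classes, batch size
W = 0.01 * rng.standard_normal((D, C))  # classifier weights
noise = 0.01 * rng.standard_normal(D)   # learnable noise prior
X = rng.standard_normal((N, D))         # dummy inputs
y = rng.integers(0, C, N)               # dummy labels
lr = 0.1

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

for _ in range(100):
    Xn = X + noise                      # perturb inputs with learned noise
    P = softmax(Xn @ W)
    G = P.copy()
    G[np.arange(N), y] -= 1.0
    G /= N                              # d(loss)/d(logits)
    W -= lr * (Xn.T @ G)                # weight update from the loss
    noise -= lr * (G @ W.T).sum(axis=0) # noise update from the SAME loss

loss = -np.mean(np.log(softmax((X + noise) @ W)[np.arange(N), y]))
print(round(loss, 3))
```

After training, the cross-entropy loss drops below the chance level of ln(3), showing that the noise and weights were jointly optimized under the single shared objective.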
