17 Oct 2019 • Anindya Sarkar, Nikhil Kumar Gupta, Raghu Iyengar
Recent studies of the adversarial vulnerability of neural networks have shown that training a model to minimize an upper bound on its worst-case loss over all admissible adversarial perturbations improves robustness against adversarial attacks.
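The worst-case objective described above can be illustrated with a minimal sketch, not taken from the paper: for a linear logistic model, the inner maximization over an L∞ ball of radius `eps` is solved exactly by the fast gradient sign perturbation. All function names (`logistic_loss`, `fgsm_perturb`, `robust_loss`) and parameter values here are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(w, x, y):
    # Logistic loss for a linear model; label y is in {-1, +1}.
    return np.log1p(np.exp(-y * np.dot(w, x)))

def fgsm_perturb(w, x, y, eps):
    # Gradient of the loss with respect to the input x,
    # derived analytically for the linear-logistic case.
    grad_x = -y * w * sigmoid(-y * np.dot(w, x))
    # Fast gradient sign step: for a linear model this attains
    # the exact maximum of the loss over the L-infinity ball.
    return x + eps * np.sign(grad_x)

def robust_loss(w, x, y, eps):
    # Worst-case (upper-bound) loss that robust training would minimize.
    return logistic_loss(w, fgsm_perturb(w, x, y, eps), y)
```

Minimizing `robust_loss` over `w` (instead of `logistic_loss`) is the min-max training objective the abstract refers to; for nonlinear networks the inner maximum is only approximated, typically by iterated gradient steps.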