29 Jul 2021 • Juan C. Pérez, Motasem Alfarra, Guillaume Jeanneret, Laura Rueda, Ali Thabet, Bernard Ghanem, Pablo Arbeláez
Deep learning models are prone to being fooled by imperceptible input perturbations crafted by adversarial attacks.
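As a concrete illustration of such a perturbation, below is a minimal sketch of the Fast Gradient Sign Method (FGSM, Goodfellow et al.), one of the simplest adversarial attacks. For self-containedness it is shown on a hypothetical linear classifier with logistic loss (the weights and inputs here are illustrative, not from the paper); real attacks compute the same input gradient through a deep network via autograd.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Return x shifted by eps * sign(grad_x loss), the FGSM step.

    Sketch for a linear model: logit z = w.x + b, logistic loss
    against label y in {0, 1}. Each input coordinate moves by
    exactly +/- eps in the direction that increases the loss.
    """
    z = w @ x + b                      # scalar logit
    p = 1.0 / (1.0 + np.exp(-z))      # predicted probability of class 1
    grad_x = (p - y) * w              # d(logistic loss)/dx
    return x + eps * np.sign(grad_x)  # imperceptible worst-case step

# Hypothetical example: a confidently correct prediction (y = 1)
x = np.array([0.2, -0.5, 1.0])
w = np.array([1.5, -2.0, 0.5])
x_adv = fgsm_perturb(x, w, b=0.1, y=1, eps=0.05)
# The perturbation is bounded by eps per coordinate, yet it lowers
# the logit for the true class, pushing the model toward an error.
```

The key point the abstract alludes to: the change to `x` is bounded by a small `eps` in every coordinate, so it can be visually imperceptible, yet it is chosen adversarially to maximally increase the loss.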