1 code implementation • 5 Dec 2019 • Alexander Turner, Dimitris Tsipras, Aleksander Madry
While such attacks are very effective, they crucially rely on the adversary injecting arbitrary inputs that are, often blatantly, mislabeled.
no code implementations • ICLR 2019 • Alexander Turner, Dimitris Tsipras, Aleksander Madry
Deep neural networks have recently been demonstrated to be vulnerable to backdoor attacks.
7 code implementations • ICLR 2019 • Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, Aleksander Madry
We show that there may exist an inherent tension between the goal of adversarial robustness and that of standard generalization.