Search Results for author: Alexander Turner

Found 3 papers, 2 papers with code

Label-Consistent Backdoor Attacks

1 code implementation • 5 Dec 2019 • Alexander Turner, Dimitris Tsipras, Aleksander Madry

While such attacks are very effective, they crucially rely on the adversary injecting arbitrary inputs that are (often blatantly) mislabeled.

Clean-Label Backdoor Attacks

no code implementations • ICLR 2019 • Alexander Turner, Dimitris Tsipras, Aleksander Madry

Deep neural networks have recently been demonstrated to be vulnerable to backdoor attacks.

Robustness May Be at Odds with Accuracy

7 code implementations • ICLR 2019 • Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, Aleksander Madry

We show that there may exist an inherent tension between the goal of adversarial robustness and that of standard generalization.

Task: Adversarial Robustness
