Search Results for author: Anish Athalye

Found 8 papers, 8 papers with code

Pervasive Label Errors in Test Sets Destabilize Machine Learning Benchmarks

2 code implementations · 26 Mar 2021 · Curtis G. Northcutt, Anish Athalye, Jonas Mueller

Errors in test sets are numerous and widespread: we estimate an average of at least 3.3% errors across the 10 datasets; for example, label errors comprise at least 6% of the ImageNet validation set.

BIG-bench Machine Learning
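The paper's estimates rest on confident learning: compare a model's out-of-sample predicted probabilities against the given labels and flag confident disagreements. Below is a minimal numpy sketch of that idea (the authors' released implementation is the cleanlab library; the function here is illustrative, not its API):

    import numpy as np

    def flag_label_issues(labels, pred_probs):
        # labels: (n,) given (possibly noisy) integer labels
        # pred_probs: (n, k) out-of-sample predicted probabilities
        # (sketch assumes every class appears at least once in labels)
        n, k = pred_probs.shape
        # Per-class threshold: mean self-confidence of examples labeled j.
        thresholds = np.array([pred_probs[labels == j, j].mean() for j in range(k)])
        issues = []
        for i in range(n):
            # Classes the model is "confident" about for example i.
            confident = np.flatnonzero(pred_probs[i] >= thresholds)
            # Flag i when the model is confident only about other classes.
            if confident.size and labels[i] not in confident:
                issues.append(i)
        return np.asarray(issues)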

Evaluating and Understanding the Robustness of Adversarial Logit Pairing

1 code implementation · 26 Jul 2018 · Logan Engstrom, Andrew Ilyas, Anish Athalye

We evaluate the robustness of Adversarial Logit Pairing, a recently proposed defense against adversarial examples.

Adversarial Attack
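Evaluations like this one rerun a strong first-order attack against the defended model; the standard choice is projected gradient descent (PGD). A minimal untargeted L-infinity PGD sketch in PyTorch, with model and data assumed:

    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=40):
        # Random start inside the eps-ball around x.
        x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            # Ascend the loss, then project back into the eps-ball.
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1).detach()
        return x_adv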

Black-box Adversarial Attacks with Limited Queries and Information

2 code implementations · ICML 2018 · Andrew Ilyas, Logan Engstrom, Anish Athalye, Jessy Lin

Current neural network-based classifiers are susceptible to adversarial examples even in the black-box setting, where the attacker only has query access to the model.
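The attack estimates gradients from queries alone using natural evolution strategies (NES) with antithetic sampling. A minimal sketch, assuming loss_fn returns a scalar score obtained by querying the model:

    import torch

    def nes_gradient(loss_fn, x, sigma=1e-3, n_samples=50):
        # Query-only gradient estimate via antithetic NES sampling:
        # grad ~ E[ u * (f(x + s*u) - f(x - s*u)) / (2*s) ], u ~ N(0, I).
        g = torch.zeros_like(x)
        for _ in range(n_samples):
            u = torch.randn_like(x)
            g += u * (loss_fn(x + sigma * u) - loss_fn(x - sigma * u))
        return g / (2 * sigma * n_samples)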

On the Robustness of the CVPR 2018 White-Box Adversarial Example Defenses

2 code implementations · 10 Apr 2018 · Anish Athalye, Nicholas Carlini

Neural networks are known to be vulnerable to adversarial examples.
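The canonical one-line demonstration of that vulnerability is the fast gradient sign method (FGSM); a minimal PyTorch sketch, with model and data assumed:

    import torch
    import torch.nn.functional as F

    def fgsm(model, x, y, eps=8/255):
        # One gradient-sign step is often enough to flip the prediction.
        x = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x), y).backward()
        return (x + eps * x.grad.sign()).clamp(0, 1).detach()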

Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples

4 code implementations · ICML 2018 · Anish Athalye, Nicholas Carlini, David Wagner

We identify obfuscated gradients, a kind of gradient masking, as a phenomenon that leads to a false sense of security in defenses against adversarial examples.

Adversarial Attack · Adversarial Defense
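The paper's workaround for non-differentiable defenses is BPDA (Backward Pass Differentiable Approximation): run the defense on the forward pass but approximate it (here by the identity) on the backward pass, so gradient-based attacks go through. A minimal PyTorch sketch; the quantization example is an illustrative stand-in for a real defense:

    import torch

    class BPDA(torch.autograd.Function):
        # Forward: run the real (possibly non-differentiable) defense.
        # Backward: pretend the defense was the identity.
        @staticmethod
        def forward(ctx, x, defense):
            return defense(x)

        @staticmethod
        def backward(ctx, grad_output):
            return grad_output, None  # d(defense)/dx approximated as I

    # Example: bit-depth quantization, whose true gradient is zero a.e.
    quantize = lambda x: (x * 255.0).round() / 255.0
    # logits = model(BPDA.apply(x, quantize))  # gradients now flow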

Query-Efficient Black-box Adversarial Examples (superceded)

1 code implementation · 19 Dec 2017 · Andrew Ilyas, Logan Engstrom, Anish Athalye, Jessy Lin

Second, we introduce a new algorithm to perform targeted adversarial attacks in the partial-information setting, where the attacker only has access to a limited number of target classes.

Adversarial Attack
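In the partial-information setting, the attack starts from a natural example of the target class and alternates between shrinking the perturbation toward the original image and climbing the target's score with query-only gradient estimates. A simplified sketch, where score(x, y) is an assumed query interface returning y's probability when y appears in the visible top-k classes and 0 otherwise:

    import torch

    def partial_info_attack(score, x_orig, x_start, y_tgt,
                            eps=0.5, eps_min=0.05, d_eps=0.005,
                            lr=0.01, sigma=1e-3, max_iters=10000):
        # x_start: a natural image already classified as y_tgt.
        x = x_start.clone()
        for _ in range(max_iters):
            if eps <= eps_min:
                break
            # Try shrinking the perturbation ball around the original.
            trial = x_orig + (x - x_orig).clamp(-(eps - d_eps), eps - d_eps)
            if score(trial, y_tgt) > 0:      # target class still visible
                x, eps = trial, eps - d_eps
            else:
                # Otherwise climb the target's score with one antithetic
                # NES estimate (query access only), then retry shrinking.
                u = torch.randn_like(x)
                g = (score(x + sigma * u, y_tgt) - score(x - sigma * u, y_tgt)) * u
                x = (x + lr * g.sign()).clamp(0, 1)
        return x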

Synthesizing Robust Adversarial Examples

3 code implementations · 24 Jul 2017 · Anish Athalye, Logan Engstrom, Andrew Ilyas, Kevin Kwok

We demonstrate the existence of robust 3D adversarial objects, and we present the first algorithm for synthesizing examples that are adversarial over a chosen distribution of transformations.
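The algorithm is Expectation Over Transformation (EOT): optimize the perturbation against the expected loss under a distribution of transformations rather than a single rendering. A minimal targeted PyTorch sketch over a fixed list of differentiable transforms (model, batched inputs, and transforms assumed):

    import torch
    import torch.nn.functional as F

    def eot_attack(model, x, y_tgt, transforms, steps=200, lr=0.01, eps=0.1):
        # Minimize the target-class loss averaged over transformations,
        # so the perturbation survives rotation, scale, lighting, etc.
        delta = torch.zeros_like(x, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            loss = sum(F.cross_entropy(model(t((x + delta).clamp(0, 1))), y_tgt)
                       for t in transforms) / len(transforms)
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)  # keep the perturbation bounded
        return (x + delta).detach().clamp(0, 1)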
