no code implementations • 26 Jan 2024 • Nuoyan Zhou, Dawei Zhou, Decheng Liu, Xinbo Gao, Nannan Wang
Deep neural networks are vulnerable to adversarial samples.
1 code implementation • 5 Oct 2023 • Nuoyan Zhou, Nannan Wang, Decheng Liu, Dawei Zhou, Xinbo Gao
Deep neural networks are vulnerable to adversarial noise.
Ranked #1 on Adversarial Attack on CIFAR-10 (AutoAttack metric)
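Both abstracts open with the same premise: deep networks are vulnerable to adversarial noise. As a hedged illustration of what that means (not the method of either paper), here is a minimal FGSM-style sketch on a toy linear classifier — all values are hypothetical, NumPy only:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear "network": score = w @ x + b, predicted class = +1 if score > 0.
w = np.array([2.0, -1.0])
b = 0.0

x = np.array([0.2, 0.1])   # clean input, true label y = +1
y = 1.0

score = w @ x + b          # 0.3 > 0, so the clean input is correctly classified

# Gradient of the logistic loss log(1 + exp(-y * score)) with respect to x.
grad_x = -y * (1.0 - sigmoid(y * score)) * w

# FGSM-style perturbation: a small step in the sign of the input gradient.
eps = 0.15
x_adv = x + eps * np.sign(grad_x)

adv_score = w @ x_adv + b  # 0.3 - 3*eps = -0.15 < 0, so the prediction flips
```

A perturbation of L-infinity size 0.15 — imperceptible in a high-dimensional image setting — is enough to flip the toy model's prediction, which is the vulnerability these papers set out to defend against.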