no code implementations • 12 Dec 2023 • Renyang Liu, Wei Zhou, Xin Jin, Song Gao, Yuanyu Wang, Ruxin Wang
To generate adversarial examples, conventional black-box attack methods rely on extensive feedback from the victim model, querying it repeatedly until the attack succeeds, which usually costs thousands of queries per attack.
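To make the query cost concrete, here is a minimal sketch of a generic score-based black-box attack of the kind the abstract describes, not this paper's method: the attacker perturbs the input at random and keeps any change that lowers the true-class score, spending one model query per trial. The `query_model` callable is a hypothetical stand-in for the victim model's prediction API.

```python
import numpy as np

def random_search_attack(query_model, x, true_label, eps=0.05,
                         max_queries=10000, rng=None):
    """Generic score-based black-box attack sketch: random search
    that keeps perturbations lowering the true-class score.
    `query_model` is a hypothetical black-box returning class scores."""
    rng = rng or np.random.default_rng(0)
    x_adv = x.copy()
    best = query_model(x_adv)[true_label]      # one query
    for _ in range(max_queries):
        candidate = np.clip(
            x_adv + rng.uniform(-eps, eps, size=x.shape), 0.0, 1.0)
        scores = query_model(candidate)        # one query per trial
        if scores.argmax() != true_label:      # misclassified: success
            return candidate
        if scores[true_label] < best:          # keep helpful perturbations
            best, x_adv = scores[true_label], candidate
    return None  # query budget exhausted without success
```

Because every trial is a separate query, the loop illustrates why such attacks can require thousands of model calls before succeeding.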
no code implementations • 15 Oct 2023 • Renyang Liu, Jinhong Zhang, Haoran Li, Jin Zhang, Yuanyu Wang, Wei Zhou
Extensive studies have demonstrated that deep neural networks (DNNs) are vulnerable to adversarial attacks.
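As a standard illustration of this vulnerability (not the method of this paper), the Fast Gradient Sign Method of Goodfellow et al. (2015) produces an adversarial example with a single gradient step; a minimal PyTorch sketch, assuming inputs normalized to [0, 1]:

```python
import torch

def fgsm(model, x, y, eps=0.03):
    """Fast Gradient Sign Method: one step of size eps in the
    sign of the loss gradient with respect to the input."""
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Perturb each pixel by eps along the sign of the loss gradient.
    x_adv = (x + eps * x.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()
```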