1 code implementation • 1 Jan 2022 • Yexin Duan, Junhua Zou, Xingyu Zhou, Wu Zhang, Jin Zhang, Zhisong Pan
Deep neural networks are vulnerable to adversarial examples, which fool deep models by adding subtle perturbations to their inputs.
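A minimal sketch of the basic idea described above: an adversarial example is the original input plus a perturbation whose magnitude is kept small, e.g. within an L-infinity budget, so the change stays subtle. The function and parameter names below are illustrative, not taken from the paper.

```python
import numpy as np

def apply_perturbation(x, delta, eps=8 / 255.0):
    """Add a perturbation to image x, keeping it inside an eps L-infinity
    ball and within the valid pixel range [0, 1]. Purely illustrative."""
    delta = np.clip(delta, -eps, eps)        # enforce the perturbation budget
    x_adv = np.clip(x + delta, 0.0, 1.0)     # keep pixels in the valid range
    return x_adv

# Example: a random bounded perturbation of a 32x32 RGB image
x = np.random.rand(3, 32, 32).astype(np.float32)
delta = np.random.uniform(-1, 1, x.shape).astype(np.float32) * (8 / 255.0)
x_adv = apply_perturbation(x, delta)
```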
2 code implementations • ECCV 2020 • Junhua Zou, Zhisong Pan, Junyang Qiu, Xin Liu, Ting Rui, Wei Li
RDIM and region fitting require no extra running time, and these three steps can be readily integrated into other attacks.
no code implementations • 1 Sep 2021 • Yexin Duan, Jialin Chen, Xingyu Zhou, Junhua Zou, Zhengyun He, Jin Zhang, Wu Zhang, Zhisong Pan
An adversary can fool deep neural network object detectors by generating adversarial noise.
2 code implementations • 8 Jul 2020 • Junhua Zou, Yexin Duan, Boyu Li, Wu Zhang, Yu Pan, Zhisong Pan
The fast gradient sign attack family comprises popular methods for generating adversarial examples.
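For reference, a minimal one-step fast gradient sign method (FGSM) sketch in PyTorch, assuming a classifier `model`, an input batch `x` scaled to [0, 1], labels `y`, and a perturbation budget `eps`. This shows the standard FGSM baseline only, not the specific variant proposed in the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255.0):
    """Generate adversarial examples with a single signed-gradient step."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction of the sign of the input gradient
    x_adv = x_adv + eps * x_adv.grad.sign()
    # Project back into the valid pixel range
    return x_adv.clamp(0.0, 1.0).detach()
```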