no code implementations • 27 Sep 2022 • Zhixing Ye, Xinwen Cheng, Xiaolin Huang
Deep Neural Networks (DNNs) are susceptible to elaborately designed perturbations, whether those perturbations depend on the specific input image or are image-agnostic.
no code implementations • 31 May 2021 • Zhixing Ye, Shaofei Qin, Sizhe Chen, Xiaolin Huang
As the name suggests, for a natural image, if we add the dominant pattern of a DNN to it, the output of the DNN is determined by the dominant pattern rather than the original image, i.e., the DNN's prediction is the same as its prediction for the dominant pattern alone.
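A minimal sketch of this behavior, assuming a precomputed dominant pattern stored as `dominant_pattern.pt` and a ResNet-50 victim model (both the file name and the model choice are illustrative assumptions, not the paper's actual setup):

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Hypothetical artifact: a precomputed image-agnostic dominant pattern.
# The paper's construction of the pattern is not reproduced here.
pattern = torch.load("dominant_pattern.pt")  # shape (1, 3, 224, 224)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()
preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
image = preprocess(Image.open("cat.jpg")).unsqueeze(0)

with torch.no_grad():
    pred_image = model(image).argmax(dim=1)                        # clean image
    pred_pattern = model(pattern).argmax(dim=1)                    # pattern alone
    pred_mixed = model((image + pattern).clamp(0, 1)).argmax(dim=1)  # image + pattern

# If `pattern` is a dominant pattern for this model, pred_mixed should match
# pred_pattern rather than pred_image.
print(pred_image.item(), pred_pattern.item(), pred_mixed.item())
```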
no code implementations • 20 Feb 2021 • Sizhe Chen, Qinghua Tao, Zhixing Ye, Xiaolin Huang
Deep neural networks can be fooled by adversarial examples that differ only trivially from the original samples.
no code implementations • 21 Jan 2020 • Zhixing Ye, Sizhe Chen, Peidong Zhang, Chengjin Sun, Xiaolin Huang
Adversarial attacks have long been developed to reveal the vulnerability of Deep Neural Networks (DNNs) by adding imperceptible perturbations to the input.
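For context, the single-step FGSM attack (Goodfellow et al., 2015) is a standard illustration of how such an imperceptible perturbation can be generated; this is a baseline sketch, not the specific method of the paper above:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255):
    """One-step FGSM: nudge each pixel along the sign of the loss gradient.
    With eps = 8/255, the change stays within the common imperceptibility
    budget used for 8-bit images."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the classification loss.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```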