no code implementations • 25 Oct 2022 • Huan Hua, Jun Yan, Xi Fang, Weiquan Huang, Huilin Yin, Wancheng Ge
With such a framework, the influence of non-robust features can be mitigated, strengthening adversarial robustness.
1 code implementation • 8 Jun 2022 • Jun Yan, Huilin Yin, Xiaoyang Deng, Ziming Zhao, Wancheng Ge, Hao Zhang, Gerhard Rigoll
Since adversarial vulnerability can be regarded as a high-frequency phenomenon, it is essential to regularize adversarially trained neural network models in the frequency domain.
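The high-frequency view above can be made concrete by decomposing an image into low- and high-frequency components with the 2-D FFT; a frequency-domain regularizer might then, for example, penalize a model's sensitivity to the high-frequency part. The following is a minimal sketch of such a decomposition (an illustration only, not the paper's method; the circular-mask radius is an arbitrary choice):

```python
import numpy as np

def split_frequencies(img, radius):
    """Split an image into low- and high-frequency parts via a circular
    mask in the centered 2-D Fourier spectrum."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    # Frequencies within `radius` of the spectrum center are "low".
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    low = np.fft.ifft2(np.fft.ifftshift(f * mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(f * ~mask)).real
    return low, high

img = np.random.rand(32, 32)
low, high = split_frequencies(img, radius=8)
# The two parts sum back to the original image exactly (the mask and its
# complement partition the spectrum).
assert np.allclose(low + high, img)
```

Because the mask and its complement partition the spectrum, the decomposition is lossless, so any regularization applied to the `high` component acts only on the fine-grained detail that adversarial perturbations tend to occupy.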
1 code implementation • 10 Aug 2021 • Jun Yan, Xiaoyang Deng, Huilin Yin, Wancheng Ge
Deep Neural Networks (DNNs) are vulnerable to adversarial examples: small perturbations of the input images that mislead the network into making prediction errors.
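How a small perturbation flips a prediction can be sketched on a toy linear classifier using the fast gradient sign method (FGSM); this is a generic illustration of the vulnerability, not the paper's code, and the weights and step size are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)            # fixed "trained" weights of a linear classifier
x = 0.1 * w / np.linalg.norm(w)    # input weakly aligned with w -> class 1

def predict(x):
    """Binary prediction from the sign of the linear score."""
    return int(w @ x > 0)

# For a linear score w @ x, the gradient w.r.t. x is just w, so the FGSM
# step moves each coordinate by eps against the sign of the gradient.
eps = 0.1
x_adv = x - eps * np.sign(w)

assert predict(x) == 1      # clean input: class 1
assert predict(x_adv) == 0  # perturbed input: prediction flips
```

The flip is guaranteed here because the score drops by `eps * ||w||_1`, which always exceeds the clean margin `eps * ||w||_2`; in deep networks the same sign-of-gradient step is applied to the pixel-wise input gradient.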