Search Results for author: Wancheng Ge

Found 3 papers, 2 papers with code

Causal Information Bottleneck Boosts Adversarial Robustness of Deep Neural Network

no code implementations • 25 Oct 2022 • Huan Hua, Jun Yan, Xi Fang, Weiquan Huang, Huilin Yin, Wancheng Ge

With such a framework, the influence of non-robust features can be mitigated to strengthen adversarial robustness.

Adversarial Robustness · Causal Inference
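
The snippet above mentions mitigating non-robust features via a bottleneck on the learned representation. Below is a minimal, illustrative sketch of a generic variational information-bottleneck regularizer in PyTorch; it is not the causal framework from this paper, and the encoder architecture, latent size, and `beta` weight are assumptions chosen only for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VIBClassifier(nn.Module):
    """Toy classifier with a stochastic bottleneck (illustrative, not the paper's model)."""
    def __init__(self, in_dim=784, latent_dim=32, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, 2 * latent_dim))
        self.head = nn.Linear(latent_dim, num_classes)

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        return self.head(z), mu, logvar

def vib_loss(logits, labels, mu, logvar, beta=1e-3):
    # Cross-entropy keeps the latent code predictive of the label, while the KL
    # term compresses away input-specific (potentially non-robust) detail.
    ce = F.cross_entropy(logits, labels)
    kl = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1))
    return ce + beta * kl
```
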

Wavelet Regularization Benefits Adversarial Training

1 code implementation • 8 Jun 2022 • Jun Yan, Huilin Yin, Xiaoyang Deng, Ziming Zhao, Wancheng Ge, Hao Zhang, Gerhard Rigoll

Since adversarial vulnerability can be regarded as a high-frequency phenomenon, it is essential to regulate adversarially trained neural network models in the frequency domain.

Adversarial Robustness
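
To make the idea of frequency-domain regulation concrete, here is a minimal sketch of an adversarial-training step that adds a wavelet-domain penalty on high-frequency feature energy. The one-level Haar transform, the penalty weight `lam`, and the hypothetical `model.features` / `model.classifier` split are all assumptions for illustration; this is not the paper's exact regularizer.

```python
import torch
import torch.nn.functional as F

def haar_highfreq_energy(feat):
    """One-level 2-D Haar decomposition of a feature map (N, C, H, W);
    returns the mean energy of the three high-frequency sub-bands."""
    # Crop to even spatial dimensions so the 2x2 blocks line up.
    feat = feat[..., :feat.shape[-2] // 2 * 2, :feat.shape[-1] // 2 * 2]
    a = feat[..., 0::2, 0::2]
    b = feat[..., 0::2, 1::2]
    c = feat[..., 1::2, 0::2]
    d = feat[..., 1::2, 1::2]
    lh = (a - b + c - d) / 2.0
    hl = (a + b - c - d) / 2.0
    hh = (a - b - c + d) / 2.0
    return (lh.pow(2) + hl.pow(2) + hh.pow(2)).mean()

def adv_train_step(model, x_adv, y, optimizer, lam=1e-4):
    # Standard adversarial-training loss on adversarial inputs, plus a
    # wavelet-domain penalty on an intermediate feature map
    # (model.features / model.classifier are hypothetical hooks).
    optimizer.zero_grad()
    feats = model.features(x_adv)
    logits = model.classifier(feats)
    loss = F.cross_entropy(logits, y) + lam * haar_highfreq_energy(feats)
    loss.backward()
    optimizer.step()
    return loss.item()
```
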

On Procedural Adversarial Noise Attack And Defense

1 code implementation • 10 Aug 2021 • Jun Yan, Xiaoyang Deng, Huilin Yin, Wancheng Ge

Deep Neural Networks (DNNs) are vulnerable to adversarial examples, which mislead neural networks into prediction errors through small perturbations of the input images.
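
As a concrete illustration of how a small perturbation can flip a prediction, here is a standard FGSM-style perturbation sketch. FGSM is a generic gradient-based attack, not the procedural-noise attacks studied in this paper; the epsilon budget is an assumed example value.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=8 / 255):
    """Fast Gradient Sign Method: one small, bounded step in the direction
    that increases the loss, which often changes the model's prediction."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Keep the perturbed image in the valid [0, 1] pixel range.
    x_adv = (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
    return x_adv
```
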
