Search Results for author: Xuemei Zeng

Found 2 papers, 0 papers with code

Analyzing Adversarial Robustness of Deep Neural Networks in Pixel Space: a Semantic Perspective

no code implementations • 18 Jun 2021 • Lina Wang, Xingshu Chen, Yulong Wang, Yawei Yue, Yi Zhu, Xuemei Zeng, Wei Wang

Previous works study the adversarial robustness of image classifiers at the image level and use all the pixel information in an image indiscriminately, without exploring regions with different semantic meanings in the pixel space of an image.

Adversarial Robustness
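To make the pixel-space, region-level idea in the abstract above concrete, here is an illustrative sketch (not the authors' method) that restricts a one-step gradient-sign perturbation to a chosen semantic region via a binary mask; `model`, `image`, `label`, and `region_mask` are assumed inputs.

```python
# Illustrative sketch only: perturb pixels inside a semantic region, leaving
# the rest of the image untouched, to contrast region-level perturbations
# with perturbing all pixels indiscriminately.
import torch
import torch.nn.functional as F

def masked_fgsm(model, image, label, region_mask, eps=8 / 255):
    """Apply a gradient-sign perturbation only where region_mask == 1."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Sign of the input gradient, zeroed outside the region of interest.
    perturbation = eps * image.grad.sign() * region_mask
    return (image + perturbation).clamp(0, 1).detach()
```

Comparing attacks masked to different regions (e.g., object vs. background) is one simple way to probe how robustness varies across semantically different parts of an image.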

Improving adversarial robustness of deep neural networks by using semantic information

no code implementations • 18 Aug 2020 • Li-Na Wang, Rui Tang, Yawei Yue, Xingshu Chen, Wei Wang, Yi Zhu, Xuemei Zeng

Deep neural networks (DNNs) are vulnerable to adversarial attacks, which mislead state-of-the-art classifiers into incorrect classifications with high confidence by deliberately perturbing the original inputs; this vulnerability raises concerns about the robustness of DNNs.

Adversarial Attack • Adversarial Robustness
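The abstract above describes attacks that perturb inputs so a classifier misclassifies them with high confidence. A minimal sketch of such an attack (a single-step FGSM-style perturbation, not the defense proposed in the paper) is shown below; `model`, `image`, and `label` are assumed to be a trained classifier, an input batch in [0, 1], and its ground-truth labels.

```python
# Minimal sketch of a deliberate input perturbation of the kind the abstract
# describes: a single gradient-sign step that often flips the prediction
# while keeping the perturbed image visually close to the original.
import torch
import torch.nn.functional as F

def fgsm(model, image, label, eps=4 / 255):
    image = image.clone().detach().requires_grad_(True)
    F.cross_entropy(model(image), label).backward()
    return (image + eps * image.grad.sign()).clamp(0, 1).detach()

# Usage (hypothetical tensors):
# adv = fgsm(model, image, label)
# model(adv).argmax(1)                  # prediction often differs from `label`
# F.softmax(model(adv), dim=1).max(1)   # yet the wrong class can score near 1
```

The last two lines illustrate the "incorrect classification with high confidence" behavior: the misclassified adversarial input can still receive a softmax score close to 1.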
