Search Results for author: Wenzhao Xiang

Found 8 papers, 2 papers with code

Improving Model Generalization by On-manifold Adversarial Augmentation in the Frequency Domain

no code implementations 28 Feb 2023 Chang Liu, Wenzhao Xiang, Yuan He, Hui Xue, Shibao Zheng, Hang Su

To address this issue, we propose AdvWavAug, a method for Augmenting data with Adversarial examples via a Wavelet module: an on-manifold adversarial data augmentation technique that is simple to implement.
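The abstract's core idea, perturbing in a wavelet (frequency) domain rather than raw pixel space, can be illustrated with a minimal sketch. This is not the paper's implementation: it uses a hand-rolled single-level Haar transform and a hypothetical `wavelet_perturb` helper, and it steps only the high-frequency detail coefficients along the sign of a supplied gradient.

```python
import numpy as np

def haar_dwt(x):
    """Single-level 1-D Haar transform: approximation and detail coefficients."""
    s = np.sqrt(2.0)
    return (x[0::2] + x[1::2]) / s, (x[0::2] - x[1::2]) / s

def haar_idwt(a, d):
    """Inverse single-level 1-D Haar transform (exact round-trip)."""
    s = np.sqrt(2.0)
    x = np.empty(a.size * 2)
    x[0::2] = (a + d) / s
    x[1::2] = (a - d) / s
    return x

def wavelet_perturb(signal, grad, eps=0.1):
    """Illustrative frequency-domain step: perturb only the detail
    (high-frequency) coefficients in the gradient's sign direction,
    then reconstruct the signal."""
    a, d = haar_dwt(signal)
    _, gd = haar_dwt(grad)          # gradient expressed in coefficient space
    d_adv = d + eps * np.sign(gd)   # adversarial step on detail coefficients
    return haar_idwt(a, d_adv)

x = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, -0.5, 0.2, 0.2])
x_adv = wavelet_perturb(x, g)
```

Because only detail coefficients move, the low-frequency content (the approximation band) is preserved exactly, which is one simple way to keep augmented samples close to the data manifold.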

Data Augmentation

A Comprehensive Study on Robustness of Image Classification Models: Benchmarking and Rethinking

no code implementations 28 Feb 2023 Chang Liu, Yinpeng Dong, Wenzhao Xiang, Xiao Yang, Hang Su, Jun Zhu, Yuefeng Chen, Yuan He, Hui Xue, Shibao Zheng

In our benchmark, we evaluate the robustness of 55 typical deep learning models on ImageNet with diverse architectures (e.g., CNNs, Transformers) and learning algorithms (e.g., normal supervised training, pre-training, adversarial training) under numerous adversarial attacks and out-of-distribution (OOD) datasets.

Adversarial Robustness Benchmarking +2

Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness

no code implementations 13 Oct 2021 Xiao Yang, Yinpeng Dong, Wenzhao Xiang, Tianyu Pang, Hang Su, Jun Zhu

The vulnerability of deep neural networks to adversarial examples has motivated an increasing number of defense strategies for promoting model robustness.

Adversarial Robustness

You Cannot Easily Catch Me: A Low-Detectable Adversarial Patch for Object Detectors

no code implementations 30 Sep 2021 Zijian Zhu, Hang Su, Chang Liu, Wenzhao Xiang, Shibao Zheng

Fortunately, most existing adversarial patches can be outwitted, disabled, and rejected by a simple classification network, called an adversarial patch detector, which distinguishes adversarial patches from original images.

Self-Driving Cars

Improving the Robustness of Adversarial Attacks Using an Affine-Invariant Gradient Estimator

no code implementations 13 Sep 2021 Wenzhao Xiang, Hang Su, Chang Liu, Yandong Guo, Shibao Zheng

As designers of artificial intelligence try to outwit hackers, both sides continue to hone in on AI's inherent vulnerabilities.

Adversarial Attack

Improving Visual Quality of Unrestricted Adversarial Examples with Wavelet-VAE

no code implementations ICML Workshop AML 2021 Wenzhao Xiang, Chang Liu, Shibao Zheng

Traditional adversarial examples are typically generated by adding perturbation noise to the input image within a small norm bound.
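The norm-bounded perturbation the abstract contrasts against can be sketched with the classic Fast Gradient Sign Method (FGSM) of Goodfellow et al.; this is an illustrative example of the conventional approach, not code from the listed paper, and `fgsm_perturb` is a hypothetical helper name.

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.03):
    """One FGSM step: an L-infinity-bounded perturbation along the sign of
    the loss gradient, the canonical 'small norm' adversarial example."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)  # stay in the valid pixel range [0, 1]

x = np.array([0.2, 0.5, 0.9])        # toy 'image' as normalized pixels
grad = np.array([0.7, -0.1, 0.0])    # loss gradient w.r.t. the input
x_adv = fgsm_perturb(x, grad, eps=0.03)
```

By construction, the L-infinity distance between `x_adv` and `x` never exceeds `eps`, which is exactly the matrix/vector-norm constraint that unrestricted adversarial examples (the subject of this paper) relax.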

Adversarial Attack
