Search Results for author: Hengwei Zhang

Found 4 papers, 1 paper with code

Adversarial example soups: averaging multiple adversarial examples improves transferability without increasing additional generation time

no code implementations27 Feb 2024 Bo Yang, Hengwei Zhang, Chenwei Li, Jindong Wang

For transfer-based attacks, adversarial examples are crafted on a surrogate model and can then be transferred to mislead the target model effectively.
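The "soup" idea named in the title can be sketched as averaging several independently crafted adversarial examples and re-projecting the result into the original perturbation budget. This is a minimal illustration, not the authors' exact procedure; the function name and the L-infinity projection are assumptions.

```python
import numpy as np

def adversarial_soup(adv_examples, x_clean, epsilon):
    """Average several adversarial examples ('soup') and project the
    result back into the epsilon L_inf ball around the clean input.
    A sketch of the averaging idea, not the paper's exact method."""
    avg = np.mean(adv_examples, axis=0)
    return np.clip(avg, x_clean - epsilon, x_clean + epsilon)

# toy demo: three perturbed copies of a clean "image"
rng = np.random.default_rng(0)
x = np.zeros((4, 4))
advs = np.stack([np.clip(x + rng.uniform(-0.15, 0.15, x.shape),
                         x - 0.1, x + 0.1) for _ in range(3)])
soup = adversarial_soup(advs, x, epsilon=0.1)
```

Because the average of points in a convex epsilon-ball stays inside it, the soup costs no extra generation time beyond the examples already crafted.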

Adversarial example generation with AdaBelief Optimizer and Crop Invariance

no code implementations7 Feb 2021 Bo Yang, Hengwei Zhang, Yuchen Zhang, Kaiyong Xu, Jindong Wang

ABI-FGM and CIM can be readily integrated to build a strong gradient-based attack to further boost the success rates of adversarial examples for black-box attacks.
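The AdaBelief-based iterative attack can be sketched as an I-FGSM loop whose step direction is smoothed by AdaBelief-style first and second moments. This is a hedged sketch of the idea, assuming a generic `grad_fn` that returns the loss gradient on the surrogate model; the function name and hyperparameters are illustrative, not the paper's exact ABI-FGM.

```python
import numpy as np

def adabelief_ifgsm(x, grad_fn, eps=0.05, steps=10,
                    beta1=0.9, beta2=0.999, delta=1e-8):
    """Iterative FGSM with AdaBelief-style moment smoothing (a sketch
    of the ABI-FGM idea; not the authors' exact implementation)."""
    alpha = eps / steps
    adv = x.copy()
    m, s = np.zeros_like(x), np.zeros_like(x)
    for t in range(1, steps + 1):
        g = grad_fn(adv)
        m = beta1 * m + (1 - beta1) * g
        # AdaBelief tracks the variance of (g - m), i.e. its "belief" in g
        s = beta2 * s + (1 - beta2) * (g - m) ** 2
        m_hat = m / (1 - beta1 ** t)
        s_hat = s / (1 - beta2 ** t)
        adv = adv + alpha * np.sign(m_hat / (np.sqrt(s_hat) + delta))
        adv = np.clip(adv, x - eps, x + eps)  # stay in the L_inf budget
    return adv

# toy demo: gradient of 0.5 * ||z||^2 is z itself
x0 = np.full(3, 0.5)
adv0 = adabelief_ifgsm(x0, lambda z: z, eps=0.05, steps=5)
```

Crop invariance (CIM) would additionally average the gradient over randomly cropped copies of the input before the update; it composes with this loop in the same way other input transformations do.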

Random Transformation of Image Brightness for Adversarial Attack

1 code implementation12 Jan 2021 Bo Yang, Kaiyong Xu, Hengjun Wang, Hengwei Zhang

Before deep neural networks are deployed, adversarial attacks are thus an important method for evaluating and selecting robust models in safety-critical applications.

Tasks: Adversarial Attack, Image Augmentation
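The brightness-transformation idea in the title can be sketched as averaging the loss gradient over randomly brightness-scaled copies of the input before each attack step. The function name, the scaling range, and the generic `grad_fn` are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

def brightness_averaged_grad(x, grad_fn, n=5, low=0.5, high=1.5, seed=0):
    """Average the loss gradient over n randomly brightness-scaled
    copies of the input (a sketch of the random-brightness idea)."""
    rng = np.random.default_rng(seed)
    grads = [grad_fn(rng.uniform(low, high) * x) for _ in range(n)]
    return np.mean(grads, axis=0)

# toy demo: gradient of ||z||^2 is 2 z
x1 = np.array([1.0, -2.0, 3.0])
g1 = brightness_averaged_grad(x1, lambda z: 2.0 * z)
```

Averaging over such input transformations tends to reduce overfitting to the surrogate model, which is why transformation-based methods improve black-box transferability.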

Boosting Adversarial Attacks on Neural Networks with Better Optimizer

no code implementations1 Dec 2020 Heng Yin, Hengwei Zhang, Jindong Wang, Ruiyu Dou

However, the success rate of adversarial attacks can be further improved in black-box environments.
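The baseline such optimizer-based improvements build on is momentum iterative FGSM (MI-FGSM), which accumulates an L1-normalized gradient across steps. A minimal sketch, assuming a generic surrogate-model `grad_fn`; the normalization constant and step schedule are illustrative:

```python
import numpy as np

def momentum_ifgsm(x, grad_fn, eps=0.05, steps=10, mu=1.0):
    """Momentum iterative FGSM sketch: accumulate normalized gradients,
    step by the sign of the accumulator, clip to the L_inf budget."""
    alpha = eps / steps
    adv, g = x.copy(), np.zeros_like(x)
    for _ in range(steps):
        grad = grad_fn(adv)
        g = mu * g + grad / (np.abs(grad).mean() + 1e-12)  # L1-style normalization
        adv = np.clip(adv + alpha * np.sign(g), x - eps, x + eps)
    return adv

# toy demo: gradient of 0.5 * ||z||^2 is z itself
x2 = np.full(4, 0.25)
adv2 = momentum_ifgsm(x2, lambda z: z, eps=0.04, steps=4)
```

Swapping the sign-of-accumulator update for an Adam-style adaptive update is the kind of "better optimizer" change this line of work explores.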
