Search Results for author: Xiaosen Wang

Found 26 papers, 17 papers with code

Bag of Tricks to Boost Adversarial Transferability

no code implementations • 16 Jan 2024 • Zeliang Zhang, Rongyi Zhu, Wei Yao, Xiaosen Wang, Chenliang Xu

In this work, we find that several tiny changes to existing adversarial attacks, e.g., the number of iterations and the step size, can significantly affect the attack performance.
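
As a rough illustration, here is a minimal PyTorch sketch of the plain iterative FGSM loop, with the number of iterations and the step size exposed as exactly the hyperparameters the paper highlights; the defaults shown are common choices in the transferability literature, not the paper's tuned values.

```python
import torch
import torch.nn.functional as F

def ifgsm(model, x, y, eps=16/255, alpha=1.6/255, steps=10):
    """Basic I-FGSM. The number of steps and the step size alpha are the
    kind of 'tiny' hyperparameters the paper shows can swing transferability."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # step along the gradient sign, then project back into the eps-ball
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.clamp(x_adv, x - eps, x + eps).clamp(0, 1)
    return x_adv
```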

Generating Visually Realistic Adversarial Patch

no code implementations • 5 Dec 2023 • Xiaosen Wang, Kunyu Wang

Moreover, the generated adversarial patches can be disguised as a scrawl or logo in the physical world to fool deep models without being detected, posing significant threats to DNN-enabled applications.


MMA-Diffusion: MultiModal Attack on Diffusion Models

2 code implementations • 29 Nov 2023 • Yijun Yang, Ruiyuan Gao, Xiaosen Wang, Tsung-Yi Ho, Nan Xu, Qiang Xu

In recent years, Text-to-Image (T2I) models have seen remarkable advancements, gaining widespread adoption.

Rethinking Mixup for Improving the Adversarial Transferability

no code implementations • 28 Nov 2023 • Xiaosen Wang, Zeyuan Yin

In this work, we posit that the adversarial examples located at the convergence of decision boundaries across various categories exhibit better transferability and identify that Admix tends to steer the adversarial examples towards such regions.

Structure Invariant Transformation for better Adversarial Transferability

2 code implementations • ICCV 2023 • Xiaosen Wang, Zeliang Zhang, Jianping Zhang

In this work, we find that the existing input transformation based attacks transform the input image globally, resulting in limited diversity of the transformed images.

Adversarial Attack
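
The remedy the title suggests is to transform blocks of the image independently rather than the whole image at once. Below is a hedged PyTorch sketch of that general idea; the grid size and the pool of per-block transforms are illustrative, not the paper's exact structure-invariant set.

```python
import random
import torch

def blockwise_transform(x, n_blocks=3):
    """Split the image into an n_blocks x n_blocks grid and apply a randomly
    chosen transform to each block independently, so the transformed copies
    are more diverse than a single global transform. Transform pool is
    illustrative; each transform preserves the block's shape."""
    transforms = [
        lambda b: b,                                            # identity
        lambda b: torch.flip(b, dims=[-1]),                     # horizontal flip
        lambda b: torch.rot90(b, 2, dims=[-2, -1]),             # 180-degree rotation
        lambda b: b * torch.empty_like(b).uniform_(0.8, 1.2),   # random scaling
    ]
    out_rows = []
    for row in torch.chunk(x, n_blocks, dim=-2):
        blocks = [random.choice(transforms)(b)
                  for b in torch.chunk(row, n_blocks, dim=-1)]
        out_rows.append(torch.cat(blocks, dim=-1))
    return torch.cat(out_rows, dim=-2)
```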

Improving the Transferability of Adversarial Examples with Arbitrary Style Transfer

2 code implementations • 21 Aug 2023 • Zhijin Ge, Fanhua Shang, Hongying Liu, Yuanyuan Liu, Liang Wan, Wei Feng, Xiaosen Wang

Deep neural networks are vulnerable to adversarial examples crafted by applying human-imperceptible perturbations on clean inputs.

Domain Generalization · Style Transfer

Boosting Adversarial Transferability by Block Shuffle and Rotation

2 code implementations • 20 Aug 2023 • Kunyu Wang, Xuanran He, Wenxuan Wang, Xiaosen Wang

In this work, we observe that existing input transformation based attacks, one of the mainstream transfer-based attacks, result in different attention heatmaps on various models, which might limit the transferability.
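
Going by the title, a minimal sketch of a block shuffle-and-rotation transform is given below, assuming the image height and width are divisible by the grid size and simplifying rotation to multiples of 180 degrees (the paper's version likely uses arbitrary angles, which require interpolation).

```python
import random
import torch

def block_shuffle_rotate(x, n_blocks=2):
    """Split the image into an n_blocks x n_blocks grid (H and W assumed
    divisible by n_blocks so all blocks share a shape), shuffle the blocks,
    and rotate each by a random multiple of 180 degrees. A simplification
    of the titled block shuffle-and-rotation transform."""
    blocks = [b for row in torch.chunk(x, n_blocks, dim=-2)
                for b in torch.chunk(row, n_blocks, dim=-1)]
    random.shuffle(blocks)
    blocks = [torch.rot90(b, 2 * random.randint(0, 1), dims=[-2, -1])
              for b in blocks]
    rows = [torch.cat(blocks[i * n_blocks:(i + 1) * n_blocks], dim=-1)
            for i in range(n_blocks)]
    return torch.cat(rows, dim=-2)
```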

Rethinking the Backward Propagation for Adversarial Transferability

1 code implementation • NeurIPS 2023 • Xiaosen Wang, Kangheng Tong, Kun He

Existing works mainly focus on the input image and loss function so as to generate adversarial examples with higher transferability.

Boosting Adversarial Transferability by Achieving Flat Local Maxima

2 code implementations • NeurIPS 2023 • Zhijin Ge, Hongying Liu, Xiaosen Wang, Fanhua Shang, Yuanyuan Liu

Extensive experimental results on the ImageNet-compatible dataset show that the proposed method can generate adversarial examples in flat local regions and significantly improve adversarial transferability on both normally trained and adversarially trained models compared with state-of-the-art attacks.
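
One generic way to steer an attack toward flat local regions is to penalize the input-gradient norm inside the attack objective; the PyTorch sketch below shows that general idea and is not necessarily the paper's exact estimator.

```python
import torch
import torch.nn.functional as F

def flatness_aware_loss(model, x_adv, y, lam=0.1):
    """Attack objective that also penalizes the input-gradient norm, so
    maximizing it favors adversarial examples in flat regions of the loss
    surface. A generic sketch of the idea; the paper's estimator and the
    weight lam may differ. Assumes a batched input (B, C, H, W)."""
    x_adv = x_adv.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    # create_graph=True so the penalty can be differentiated through
    grad = torch.autograd.grad(loss, x_adv, create_graph=True)[0]
    return loss - lam * grad.flatten(1).norm(dim=1).mean()
```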

Diversifying the High-level Features for better Adversarial Transferability

2 code implementations • 20 Apr 2023 • Zhiyuan Wang, Zeliang Zhang, Siyuan Liang, Xiaosen Wang

Incorporated into input transformation-based attacks, DHF generates more transferable adversarial examples and outperforms the baselines by a clear margin when attacking several defense models, showing its generalization to various attacks and its effectiveness in boosting transferability.


Improving the Transferability of Adversarial Samples by Path-Augmented Method

1 code implementation • CVPR 2023 • Jianping Zhang, Jen-tse Huang, Wenxuan Wang, Yichen Li, Weibin Wu, Xiaosen Wang, Yuxin Su, Michael R. Lyu

However, such methods select the image augmentation path heuristically and may augment images that are semantically inconsistent with the target images, which harms the transferability of the generated adversarial samples.

Image Augmentation

Improving Adversarial Transferability with Scheduled Step Size and Dual Example

no code implementations • 30 Jan 2023 • Zeliang Zhang, Peihan Liu, Xiaosen Wang, Chenliang Xu

Motivated by this finding, we argue that the information carried by adversarial perturbations near the benign sample, especially their direction, contributes more to transferability.

Adversarial Attack

Robust Textual Embedding against Word-level Adversarial Attacks

1 code implementation • 28 Feb 2022 • Yichen Yang, Xiaosen Wang, Kun He

We attribute the vulnerability of natural language processing models to the fact that similar inputs are converted to dissimilar representations in the embedding space, leading to inconsistent outputs, and we propose a novel robust training method, termed Fast Triplet Metric Learning (FTML).

Attribute · Metric Learning
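
A generic PyTorch sketch of triplet metric learning on textual embeddings, in the spirit of FTML's goal of mapping similar inputs to similar representations; the embedding function and the way positives and negatives are sampled here are assumptions, not the paper's recipe.

```python
import torch
import torch.nn as nn

triplet = nn.TripletMarginLoss(margin=1.0)

def ftml_style_loss(embed, x, x_syn, x_other):
    """Pull an input and its synonym-substituted variant together in
    embedding space while pushing an unrelated input away, so similar
    texts map to similar representations. `embed` is any text encoder;
    the sampling strategy is illustrative, not the paper's."""
    anchor, positive, negative = embed(x), embed(x_syn), embed(x_other)
    return triplet(anchor, positive, negative)
```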

TextHacker: Learning based Hybrid Local Search Algorithm for Text Hard-label Adversarial Attack

1 code implementation • 20 Jan 2022 • Zhen Yu, Xiaosen Wang, Wanxiang Che, Kun He

Existing textual adversarial attacks usually rely on the gradient or the prediction confidence to generate adversarial examples, making them hard to deploy in real-world applications.

Adversarial Attack · Hard-label Attack +3

Triangle Attack: A Query-efficient Decision-based Adversarial Attack

1 code implementation • 13 Dec 2021 • Xiaosen Wang, Zeliang Zhang, Kangheng Tong, Dihong Gong, Kun He, Zhifeng Li, Wei Liu

Decision-based attacks pose a severe threat to real-world applications since they regard the target model as a black box and access only the hard prediction label.

Adversarial Attack · Dimensionality Reduction
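
To make the hard-label constraint concrete, here is a small PyTorch wrapper of the only interface a decision-based attack gets to use; it is illustrative scaffolding, not the Triangle Attack itself.

```python
import torch

class HardLabelOracle:
    """Black-box interface a decision-based attack works with: each query
    returns only the predicted class index, never logits or gradients,
    and the attack budget is measured in queries."""
    def __init__(self, model):
        self.model = model
        self.n_queries = 0

    @torch.no_grad()
    def __call__(self, x):
        self.n_queries += 1
        return self.model(x).argmax(dim=1)
```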

I-PGD-AT: Efficient Adversarial Training via Imitating Iterative PGD Attack

no code implementations • 29 Sep 2021 • Xiaosen Wang, Bhavya Kailkhura, Krishnaram Kenthapadi, Bo Li

Finally, to demonstrate the generality of I-PGD-AT, we integrate it into PGD adversarial training and show that it can further improve robustness.
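
For reference, a minimal PyTorch sketch of the vanilla PGD adversarial training step that I-PGD-AT builds on; the imitation component itself is not reproduced, and the hyperparameters shown are common defaults.

```python
import torch
import torch.nn.functional as F

def pgd_at_step(model, opt, x, y, eps=8/255, alpha=2/255, steps=10):
    """One minibatch of standard PGD adversarial training: craft a PGD
    example from a randomly perturbed start, then take an optimizer step
    on the adversarial loss. This is the vanilla baseline only."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
        x_adv = (x_adv.detach() + alpha * grad.sign()).clamp(x - eps, x + eps).clamp(0, 1)
    opt.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    opt.step()
    return loss.item()
```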

Detecting Textual Adversarial Examples through Randomized Substitution and Vote

1 code implementation • 13 Sep 2021 • Xiaosen Wang, Yifeng Xiong, Kun He

Based on this observation, we propose a novel textual adversarial example detection method, termed Randomized Substitution and Vote (RS&V), which votes the prediction label by accumulating the logits of k samples generated by randomly substituting the words in the input text with synonyms.
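
The abstract describes RS&V concretely enough for a short sketch: accumulate the logits of k randomized synonym-substituted copies of the input and take the argmax. The `synonym_substitute` helper below is hypothetical, and k is an illustrative default.

```python
import torch

def rsv_predict(model, tokens, synonym_substitute, k=25):
    """Randomized Substitution and Vote, per the abstract above: build k
    randomized copies of the input by swapping words for synonyms,
    accumulate their logits, and predict the argmax.
    `synonym_substitute` is a hypothetical helper that returns one
    randomized copy of the token sequence."""
    logits = torch.stack([model(synonym_substitute(tokens)) for _ in range(k)])
    return logits.sum(dim=0).argmax(dim=-1)
```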

Multi-stage Optimization based Adversarial Training

no code implementations • 26 Jun 2021 • Xiaosen Wang, Chuanbiao Song, Liwei Wang, Kun He

In this work, we aim to avoid catastrophic overfitting by introducing multi-step adversarial examples during single-step adversarial training.

Adversarial Robustness

Enhancing the Transferability of Adversarial Attacks through Variance Tuning

2 code implementations • CVPR 2021 • Xiaosen Wang, Kun He

Incorporating variance tuning with input transformations on iterative gradient-based attacks in the multi-model setting, the integrated method could achieve an average success rate of 90.1% against nine advanced defense methods, improving the current best attack performance significantly by 85.1%.
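
A hedged PyTorch sketch of variance tuning as it is commonly described (VMI-FGSM style): the momentum update mixes the current gradient with a variance term estimated from points sampled around the previous iterate. The hyperparameters are typical defaults from the transferability literature, not necessarily the paper's.

```python
import torch
import torch.nn.functional as F

def grad_of(model, x, y):
    """Input gradient of the cross-entropy loss."""
    x = x.clone().detach().requires_grad_(True)
    return torch.autograd.grad(F.cross_entropy(model(x), y), x)[0]

def vmi_fgsm(model, x, y, eps=16/255, steps=10, mu=1.0, N=20, beta=1.5):
    """Variance-tuned momentum attack sketch. x: batch (B, C, H, W)."""
    alpha = eps / steps
    x_adv, g, v = x.clone().detach(), torch.zeros_like(x), torch.zeros_like(x)
    for _ in range(steps):
        ghat = grad_of(model, x_adv, y)
        # momentum update with the variance-tuned gradient (L1-normalized)
        g = mu * g + (ghat + v) / (ghat + v).abs().mean(dim=(1, 2, 3), keepdim=True)
        # variance term: mean gradient over N points in a beta*eps ball, minus ghat
        v = torch.stack([
            grad_of(model, x_adv + torch.empty_like(x).uniform_(-beta * eps, beta * eps), y)
            for _ in range(N)
        ]).mean(0) - ghat
        x_adv = (x_adv + alpha * g.sign()).clamp(x - eps, x + eps).clamp(0, 1)
    return x_adv
```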

Boosting Adversarial Transferability through Enhanced Momentum

1 code implementation • 19 Mar 2021 • Xiaosen Wang, Jiadong Lin, Han Hu, Jingdong Wang, Kun He

Various momentum-based iterative gradient methods have been shown to be effective in improving adversarial transferability.

Adversarial Attack
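
For context, a minimal PyTorch sketch of the plain momentum iterative baseline (MI-FGSM) that this paper enhances; the enhanced (pre-accumulated) momentum itself is not shown.

```python
import torch
import torch.nn.functional as F

def mi_fgsm(model, x, y, eps=16/255, steps=10, mu=1.0):
    """Plain MI-FGSM: accumulate an L1-normalized gradient into a momentum
    buffer and step along its sign. x: batch (B, C, H, W). The paper's
    enhancement of this momentum is not reproduced here."""
    alpha = eps / steps
    x_adv, g = x.clone().detach(), torch.zeros_like(x)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        x_adv = (x_adv.detach() + alpha * g.sign()).clamp(x - eps, x + eps).clamp(0, 1)
    return x_adv
```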

Admix: Enhancing the Transferability of Adversarial Attacks

2 code implementations • ICCV 2021 • Xiaosen Wang, Xuanran He, Jingdong Wang, Kun He

We investigate in this direction and observe that existing transformations are all applied on a single image, which might limit the adversarial transferability.
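
A hedged sketch of Admix-style input augmentation as commonly implemented: add a small fraction of an image from another category, then generate scale copies whose gradients are averaged. The mixing ratio and copy counts below are common defaults, not necessarily the paper's.

```python
import torch

def admix_inputs(x, x_other, m1=5, m2=3, eta=0.2):
    """Admix-style augmentation: mix in a small fraction eta of an image
    from another category, then apply 1/2**i scale copies, yielding
    m1 * m2 mixed inputs whose gradients would be averaged by the attack.
    Assumes x and x_other are batches of the same shape."""
    mixed = []
    for _ in range(m2):
        # pick a random partner image for each sample in the batch
        partner = x_other[torch.randperm(x_other.size(0))]
        for i in range(m1):
            mixed.append((x + eta * partner) / 2 ** i)
    return torch.cat(mixed, dim=0)
```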

AT-GAN: An Adversarial Generative Model for Non-constrained Adversarial Examples

no code implementations • 1 Jan 2021 • Xiaosen Wang, Kun He, Chuanbiao Song, Liwei Wang, John E. Hopcroft

A recent work targets unrestricted adversarial examples using a generative model, but their method searches in the neighborhood of the input noise, so their output is in fact still constrained by the input.

Adversarial Attack · Transfer Learning

Adversarial Training with Fast Gradient Projection Method against Synonym Substitution based Text Attacks

1 code implementation • 9 Aug 2020 • Xiaosen Wang, Yichen Yang, Yihe Deng, Kun He

Adversarial training is the most empirically successful approach to improving the robustness of deep neural networks for image classification. For text classification, however, existing synonym substitution based adversarial attacks are effective but too inefficient to be incorporated into practical text adversarial training.

Adversarial Attack · Image Classification +2

Natural Language Adversarial Defense through Synonym Encoding

1 code implementation • 15 Sep 2019 • Xiaosen Wang, Hao Jin, Yichen Yang, Kun He

In natural language processing, deep learning models have recently been shown to be vulnerable to various types of adversarial perturbations, yet relatively little work has been done on the defense side.

Adversarial Attack · Adversarial Defense

A New Anchor Word Selection Method for the Separable Topic Discovery

no code implementations • 10 May 2019 • Kun He, Wu Wang, Xiaosen Wang, John E. Hopcroft

In this work, we propose a new anchor word selection method that associates the word co-occurrence probability with word similarity, assuming that the most semantically distinct words are potential candidates for the anchor words.

Word Similarity

AT-GAN: An Adversarial Generator Model for Non-constrained Adversarial Examples

no code implementations • 16 Apr 2019 • Xiaosen Wang, Kun He, Chuanbiao Song, Liwei Wang, John E. Hopcroft

In this way, AT-GAN can learn the distribution of adversarial examples that is very close to the distribution of real data.

Adversarial Attack
