no code implementations • 9 Jan 2025 • Shiji Zhao, Ranjie Duan, Fengxiang Wang, Chi Chen, Caixin Kang, Jialing Tao, Yuefeng Chen, Hui Xue, Xingxing Wei
Despite achieving some progress, these methods still show a low attack success rate on commercial closed-source MLLMs.
no code implementations • 4 Dec 2024 • Shouwei Ruan, Hanqing Liu, Yao Huang, Xiaoqi Wang, Caixin Kang, Hang Su, Yinpeng Dong, Xingxing Wei
To systematically evaluate VLMs' robustness to real-world 3D variations, we propose AdvDreamer, the first framework that generates physically reproducible adversarial 3D transformation (Adv-3DT) samples from single-view images.
no code implementations • 3 Dec 2024 • Caixin Kang, Yubo Chen, Shouwei Ruan, Shiji Zhao, Ruochen Zhang, Jiayi Wang, Shan Fu, Xingxing Wei
With the rise of deep learning, facial recognition technology has seen extensive research and rapid development.
1 code implementation • 14 Sep 2024 • Xingxing Wei, Caixin Kang, Yinpeng Dong, Zhengyi Wang, Shouwei Ruan, Yubo Chen, Hang Su
Adversarial patches present significant challenges to the robustness of deep learning models, making the development of effective defenses critical for real-world applications.
no code implementations • 4 Sep 2024 • Yunfeng Diao, Baiqi Wu, Ruixuan Zhang, Ajian Liu, Xingxing Wei, Meng Wang, He Wang
The transferability of adversarial skeletal sequences enables attacks in real-world HAR scenarios, such as autonomous driving, intelligent surveillance, and human-computer interactions.
no code implementations • 11 Jun 2024 • Yichi Zhang, Yao Huang, Yitong Sun, Chang Liu, Zhe Zhao, Zhengwei Fang, Yifan Wang, Huanran Chen, Xiao Yang, Xingxing Wei, Hang Su, Yinpeng Dong, Jun Zhu
Despite the superior capabilities of Multimodal Large Language Models (MLLMs) across diverse tasks, they still face significant trustworthiness challenges.
no code implementations • 14 May 2024 • Lingdong Kong, Shaoyuan Xie, Hanjiang Hu, Yaru Niu, Wei Tsang Ooi, Benoit R. Cottereau, Lai Xing Ng, Yuexin Ma, Wenwei Zhang, Liang Pan, Kai Chen, Ziwei Liu, Weichao Qiu, Wei Zhang, Xu Cao, Hao Lu, Ying-Cong Chen, Caixin Kang, Xinning Zhou, Chengyang Ying, Wentao Shang, Xingxing Wei, Yinpeng Dong, Bo Yang, Shengyin Jiang, Zeliang Ma, Dengyi Ji, Haiwen Li, Xingliang Huang, Yu Tian, Genghua Kou, Fan Jia, Yingfei Liu, Tiancai Wang, Ying Li, Xiaoshuai Hao, Yifan Yang, Hui Zhang, Mengchuan Wei, Yi Zhou, Haimei Zhao, Jing Zhang, Jinke Li, Xiao He, Xiaoqiang Cheng, Bingyang Zhang, Lirong Zhao, Dianlei Ding, Fangsheng Liu, Yixiang Yan, Hongming Wang, Nanfei Ye, Lun Luo, Yubo Tian, Yiwei Zuo, Zhe Cao, Yi Ren, Yunfan Li, Wenjie Liu, Xun Wu, Yifan Mao, Ming Li, Jian Liu, Jiayang Liu, Zihan Qin, Cunxi Chu, Jialei Xu, Wenbo Zhao, Junjun Jiang, Xianming Liu, Ziyan Wang, Chiwei Li, Shilong Li, Chendong Yuan, Songyue Yang, Wentao Liu, Peng Chen, Bin Zhou, Yubo Wang, Chi Zhang, Jianhang Sun, Hai Chen, Xiao Yang, Lizhong Wang, Dongyi Fu, Yongchun Lin, Huitong Yang, Haoang Li, Yadan Luo, Xianjing Cheng, Yong Xu
In the realm of autonomous driving, robust perception under out-of-distribution conditions is paramount for the safe deployment of vehicles.
1 code implementation • 26 Apr 2024 • Maoxun Yuan, Bo Cui, Tianyi Zhao, Jiayi Wang, Shan Fu, Xingxing Wei
Semantic analysis on visible (RGB) and infrared (IR) images has gained attention for its greater accuracy and robustness under low-illumination and complex weather conditions.
no code implementations • 18 Apr 2024 • Shouwei Ruan, Yinpeng Dong, Hanqing Liu, Yao Huang, Hang Su, Xingxing Wei
Vision-Language Pre-training (VLP) models like CLIP have achieved remarkable success in computer vision and particularly demonstrated superior robustness to distribution shifts of 2D images.
1 code implementation • 19 Jan 2024 • Tianyi Zhao, Maoxun Yuan, Feng Jiang, Nan Wang, Xingxing Wei
Specifically, following this perspective, we design a Redundant Spectrum Removal module to coarsely remove interfering information within each modality, and a Dynamic Feature Selection module to finely select the desired features for feature fusion.
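As a rough illustration of the frequency-domain idea, the sketch below coarsely suppresses spectrum components with a fixed circular low-pass mask. The paper's actual module is learned; `keep_ratio` is an assumed knob, not a parameter from the paper.

```python
import torch
import torch.fft

def remove_redundant_spectrum(x: torch.Tensor, keep_ratio: float = 0.5) -> torch.Tensor:
    """Coarsely suppress interfering frequency components of an image batch.

    x: (B, C, H, W). keep_ratio controls the radius of the kept low-frequency
    band; everything outside is zeroed. A hypothetical stand-in for a learned
    spectrum-removal module.
    """
    B, C, H, W = x.shape
    spec = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))  # centered spectrum
    # Build a circular low-pass mask around the spectrum center.
    yy, xx = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    dist = torch.sqrt((yy - H / 2) ** 2 + (xx - W / 2) ** 2)
    mask = (dist <= keep_ratio * min(H, W) / 2).to(x.dtype)
    filtered = spec * mask  # drop high-frequency (assumed redundant) components
    return torch.fft.ifft2(torch.fft.ifftshift(filtered, dim=(-2, -1))).real

x = torch.randn(2, 3, 64, 64)
print(remove_redundant_spectrum(x).shape)  # torch.Size([2, 3, 64, 64])
```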
1 code implementation • CVPR 2024 • Yao Huang, Yinpeng Dong, Shouwei Ruan, Xiao Yang, Hang Su, Xingxing Wei
However, the field of transferable targeted 3D adversarial attacks remains unexplored.
no code implementations • 15 Dec 2023 • Yitong Sun, Yao Huang, Xingxing Wei
Although methods such as EOT have enhanced the robustness of traditional contact attacks like adversarial patches, they fall short in practicality and concealment within dynamic environments such as traffic scenarios.
1 code implementation • 9 Dec 2023 • Shiji Zhao, Ranjie Duan, Xizhe Wang, Xingxing Wei
In this paper, we give an in-depth analysis of the potential factors and argue that the smoothness degree of samples' soft labels for different classes (i.e., hard or easy classes) affects the robust fairness of DNNs, supported by both empirical observation and theoretical analysis.
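To make the smoothness-degree idea concrete, here is a minimal sketch of per-class soft-label smoothing, assuming a hand-set smoothing factor per class; the mapping from class difficulty to factor is illustrative, not the paper's scheme.

```python
import torch

def classwise_smooth_labels(targets: torch.Tensor,
                            num_classes: int,
                            smoothing: torch.Tensor) -> torch.Tensor:
    """Soft labels with a per-class smoothing degree.

    targets: (B,) integer labels; smoothing: (num_classes,) tensor where hard
    classes get a smaller factor (sharper labels) and easy classes a larger
    one. The factor assignment here is a hypothetical illustration.
    """
    eps = smoothing[targets].unsqueeze(1)                # (B, 1) per-sample factor
    one_hot = torch.nn.functional.one_hot(targets, num_classes).float()
    return one_hot * (1 - eps) + eps / num_classes      # smoothed distribution

labels = torch.tensor([0, 2])
eps = torch.tensor([0.05, 0.1, 0.2])  # e.g. class 0 is "hard", class 2 "easy"
print(classwise_smooth_labels(labels, 3, eps))
```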
no code implementations • 16 Aug 2023 • Lei Zhao, Bo Li, Xingxing Wei
The role of the classification committee is to select the most informative images according to their uncertainty values from the view of classification, which is expected to focus more on the discrepancy and representativeness of instances.
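A minimal sketch of uncertainty-based selection in the spirit of the classification committee, assuming predictive entropy as the uncertainty measure; the committee's actual criterion may differ.

```python
import torch

def select_informative(logits: torch.Tensor, k: int) -> torch.Tensor:
    """Rank unlabeled images by predictive entropy and return the k most
    uncertain indices (a stand-in for the committee's selection criterion)."""
    probs = torch.softmax(logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(1)  # per-image uncertainty
    return entropy.topk(k).indices

logits = torch.randn(100, 10)   # committee outputs for 100 unlabeled images
print(select_informative(logits, k=5))
```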
1 code implementation • 27 Jul 2023 • Xingxing Wei, Yao Huang, Yitong Sun, Jie Yu
We also demonstrate the effectiveness of our approach in physical-world scenarios under various settings, including different angles, distances, postures, and scenes for both visible and infrared sensors.
no code implementations • 26 Jul 2023 • Junwen Chen, Xingxing Wei
In this paper, we analyse the properties of adversarial patches and find that: on the one hand, adversarial patches lead to appearance or contextual inconsistencies in the target objects; on the other hand, the patch region shows abnormal changes in the high-level feature maps of the objects extracted by a backbone network.
1 code implementation • 21 Jul 2023 • Shouwei Ruan, Yinpeng Dong, Hang Su, Jianteng Peng, Ning Chen, Xingxing Wei
Experimental results show that VIAT significantly improves the viewpoint robustness of various image classifiers based on the diversity of adversarial viewpoints generated by GMVFool.
1 code implementation • ICCV 2023 • Shouwei Ruan, Yinpeng Dong, Hang Su, Jianteng Peng, Ning Chen, Xingxing Wei
Visual recognition models are not invariant to viewpoint changes in the 3D world, as different viewing directions can dramatically affect the predictions given the same object.
1 code implementation • ICCV 2023 • Xingxing Wei, Yao Huang, Yitong Sun, Jie Yu
To show the potential risks under such scenes, we propose a unified adversarial patch to perform cross-modal physical attacks, i.e., fooling visible and infrared object detectors at the same time via a single patch.
no code implementations • 3 Jul 2023 • Wenting Tang, Xingxing Wei, Bo Li
Utilizing this new redundancy criterion, we propose a structured network pruning approach SNPFI (Structured Network Pruning by measuring Filter-wise Interaction).
2 code implementations • 28 Jun 2023 • Maoxun Yuan, Xingxing Wei
In $\mathrm{C}^2$Former, we design an Inter-modality Cross-Attention (ICA) module to obtain calibrated and complementary features by learning the cross-attention relationship between the RGB and IR modalities.
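A minimal sketch of the cross-attention idea, assuming standard multi-head attention with queries from one modality and keys/values from the other; layer names and sizes are illustrative, not the $\mathrm{C}^2$Former configuration.

```python
import torch
import torch.nn as nn

class InterModalityCrossAttention(nn.Module):
    """Sketch of cross-attention between RGB and IR feature maps.

    Queries come from one modality and keys/values from the other, so each
    stream can pull calibrated, complementary cues from its counterpart.
    """
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.rgb_from_ir = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ir_from_rgb = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, rgb: torch.Tensor, ir: torch.Tensor):
        # rgb, ir: (B, C, H, W) -> token sequences (B, H*W, C)
        B, C, H, W = rgb.shape
        r = rgb.flatten(2).transpose(1, 2)
        i = ir.flatten(2).transpose(1, 2)
        r_out, _ = self.rgb_from_ir(r, i, i)   # RGB queries attend to IR
        i_out, _ = self.ir_from_rgb(i, r, r)   # IR queries attend to RGB
        back = lambda t: t.transpose(1, 2).reshape(B, C, H, W)
        return back(r_out), back(i_out)

m = InterModalityCrossAttention()
rgb, ir = torch.randn(1, 256, 16, 16), torch.randn(1, 256, 16, 16)
out_rgb, out_ir = m(rgb, ir)
print(out_rgb.shape, out_ir.shape)
```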
1 code implementation • 28 Jun 2023 • Maoxun Yuan, Tianyi Zhao, Bo Li, Xingxing Wei
To address this issue, in this paper we observe that the spatial details from PAN images are mainly high-frequency cues, i.e., edges that reflect the contours of the input PAN images.
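The high-frequency observation can be illustrated with a simple blur-and-subtract residual; the box-blur kernel size `k` is an assumed choice, not the paper's extraction method.

```python
import torch
import torch.nn.functional as F

def high_frequency_details(pan: torch.Tensor, k: int = 5) -> torch.Tensor:
    """Isolate high-frequency cues (edges) from a PAN image batch.

    Subtracting a blurred copy leaves the high-frequency residual carrying
    the spatial detail to be injected during pan-sharpening.
    """
    pad = k // 2
    kernel = torch.full((1, 1, k, k), 1.0 / (k * k))          # box blur
    low = F.conv2d(F.pad(pan, (pad,) * 4, mode="reflect"), kernel)
    return pan - low                                           # edge residual

pan = torch.rand(1, 1, 64, 64)
print(high_frequency_details(pan).abs().mean())
```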
1 code implementation • 28 Jun 2023 • Shiji Zhao, Xizhe Wang, Xingxing Wei
In this paper, to mitigate the accuracy-robustness trade-off, we introduce the Balanced Multi-Teacher Adversarial Robustness Distillation (B-MTARD) to guide the model's Adversarial Training process by applying a strong clean teacher and a strong robust teacher to handle the clean examples and adversarial examples, respectively.
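A minimal sketch of the two-teacher distillation objective, assuming a fixed weight `alpha` between the clean-teacher and robust-teacher KL terms; B-MTARD's actual balancing is adaptive, so treat this as a simplified stand-in.

```python
import torch
import torch.nn.functional as F

def dual_teacher_distill_loss(student_clean, student_adv,
                              clean_teacher_logits, robust_teacher_logits,
                              T: float = 4.0, alpha: float = 0.5):
    """Two-teacher robustness distillation: the clean teacher supervises the
    student on clean inputs, the robust teacher on adversarial inputs."""
    kl = lambda s, t: F.kl_div(F.log_softmax(s / T, dim=1),
                               F.softmax(t / T, dim=1),
                               reduction="batchmean") * T * T
    return alpha * kl(student_clean, clean_teacher_logits) \
         + (1 - alpha) * kl(student_adv, robust_teacher_logits)

s_c, s_a = torch.randn(8, 10), torch.randn(8, 10)
t_c, t_a = torch.randn(8, 10), torch.randn(8, 10)
print(dual_teacher_distill_loss(s_c, s_a, t_c, t_a).item())
```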
1 code implementation • 28 Jun 2023 • Xingxing Wei, Shouwei Ruan, Yinpeng Dong, Hang Su
In this paper, we propose the Distribution-Optimized Adversarial Patch (DOPatch), a novel method that optimizes a multimodal distribution of adversarial locations instead of individual ones.
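To illustrate optimizing a distribution of locations rather than a single point, the sketch below samples patch positions from a Gaussian mixture; the mixture form and all parameters are assumptions, not DOPatch's exact parameterization.

```python
import torch
from torch.distributions import Categorical, Normal

def sample_patch_locations(means, stds, weights, n: int):
    """Draw adversarial patch locations from a K-component Gaussian mixture
    over normalized (x, y) coordinates; optimizing the mixture parameters
    (not shown) lets the attack cover multiple vulnerable regions."""
    comp = Categorical(weights).sample((n,))                  # choose a mode
    locs = Normal(means[comp], stds[comp]).sample()           # (n, 2) positions
    return locs.clamp(0.0, 1.0)

means = torch.tensor([[0.3, 0.3], [0.7, 0.6]])   # two illustrative modes
stds = torch.full((2, 2), 0.05)
weights = torch.tensor([0.5, 0.5])
print(sample_patch_locations(means, stds, weights, n=4))
```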
1 code implementation • 28 Jun 2023 • Xingxing Wei, Shiji Zhao
The proposed approach is a preprocessing method and can be integrated with existing methods to further boost the transferability.
1 code implementation • 15 Jun 2023 • Caixin Kang, Yinpeng Dong, Zhengyi Wang, Shouwei Ruan, Yubo Chen, Hang Su, Xingxing Wei
Adversarial attacks, particularly patch attacks, pose significant threats to the robustness and reliability of deep learning models.
1 code implementation • 6 Jun 2023 • Xingxing Wei, Shiji Zhao, Bo Li
Benefiting from the dynamic network architecture, clean and adversarial examples can be processed with different network weights, which provides the potential to enhance both accuracy and adversarial robustness.
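A minimal sketch of the different-weights-per-input idea, assuming a learned scalar gate that blends two parallel convolution branches; the gating design is hypothetical, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DualRouteBlock(nn.Module):
    """Route clean and adversarial inputs through different parameters.

    A lightweight gate predicts how adversarial an input looks and blends two
    weight sets accordingly, so the two kinds of examples are effectively
    processed by different parameters.
    """
    def __init__(self, dim: int = 64):
        super().__init__()
        self.clean_path = nn.Conv2d(dim, dim, 3, padding=1)
        self.robust_path = nn.Conv2d(dim, dim, 3, padding=1)
        self.gate = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(dim, 1), nn.Sigmoid())

    def forward(self, x):
        g = self.gate(x).view(-1, 1, 1, 1)          # 0 ~ clean, 1 ~ adversarial
        return (1 - g) * self.clean_path(x) + g * self.robust_path(x)

block = DualRouteBlock()
print(block(torch.randn(2, 64, 16, 16)).shape)
```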
1 code implementation • 1 Apr 2023 • Xiaojun Jia, Yong Zhang, Xingxing Wei, Baoyuan Wu, Ke Ma, Jue Wang, Xiaochun Cao
This initialization is generated by using high-quality adversarial perturbations from the historical training process.
no code implementations • 20 Mar 2023 • Yinpeng Dong, Caixin Kang, Jinlai Zhang, Zijian Zhu, Yikai Wang, Xiao Yang, Hang Su, Xingxing Wei, Jun Zhu
3D object detection is an important task in autonomous driving to perceive the surroundings.
no code implementations • 17 Mar 2023 • Xingxing Wei, Bangzheng Pu, Shiji Zhao, Chen Chi, Huazhu Fu
The advancement of deep learning has facilitated the integration of Artificial Intelligence (AI) into clinical practices, particularly in computer-aided diagnosis.
1 code implementation • CVPR 2023 • Yinpeng Dong, Caixin Kang, Jinlai Zhang, Zijian Zhu, Yikai Wang, Xiao Yang, Hang Su, Xingxing Wei, Jun Zhu
3D object detection is an important task in autonomous driving to perceive the surroundings.
1 code implementation • 26 Dec 2022 • Xingxing Wei, Ying Guo, Jie Yu, Bo Zhang
Extensive experiments are conducted on the Face Recognition (FR) task, and results on four representative FR models show that our method can significantly improve the attack success rate and query efficiency.
no code implementations • 3 Nov 2022 • Xingxing Wei, Bangzheng Pu, Jiefan Lu, Baoyuan Wu
The current adversarial attacks in computer vision can be divided into digital attacks and physical attacks according to their different attack forms.
1 code implementation • 8 Oct 2022 • Yinpeng Dong, Shouwei Ruan, Hang Su, Caixin Kang, Xingxing Wei, Jun Zhu
Recent studies have demonstrated that visual recognition models lack robustness to distribution shift.
no code implementations • 28 Sep 2022 • Maoxun Yuan, Yinyan Wang, Xingxing Wei
Then, we propose a Translation-Scale-Rotation Alignment (TSRA) module to address the problem by calibrating the feature maps from these two modalities.
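A minimal sketch of feature-map calibration by translation, scale, and rotation via an affine grid warp; how the offsets `(tx, ty, s, theta)` are regressed is outside this sketch, and the names are illustrative.

```python
import torch
import torch.nn.functional as F

def tsr_align(feat: torch.Tensor, tx, ty, s, theta) -> torch.Tensor:
    """Warp one modality's feature map by predicted translation, scale, and
    rotation so it lines up with the other modality before fusion."""
    cos, sin = torch.cos(theta), torch.sin(theta)
    mat = torch.stack([
        torch.stack([s * cos, -s * sin, tx], dim=-1),
        torch.stack([s * sin,  s * cos, ty], dim=-1)], dim=1)   # (B, 2, 3)
    grid = F.affine_grid(mat, feat.shape, align_corners=False)
    return F.grid_sample(feat, grid, align_corners=False)

feat = torch.randn(2, 64, 32, 32)
tx, ty = torch.zeros(2), torch.zeros(2)
s, theta = torch.ones(2), torch.full((2,), 0.1)
print(tsr_align(feat, tx, ty, s, theta).shape)
```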
1 code implementation • 18 Jul 2022 • Xiaojun Jia, Yong Zhang, Xingxing Wei, Baoyuan Wu, Ke Ma, Jue Wang, Xiaochun Cao
Based on the observation, after investigating several initialization strategies, we propose a prior-guided FGSM initialization method to avoid overfitting, improving the quality of the adversarial examples (AEs) during the whole training process.
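A minimal sketch of the prior-guided initialization idea, assuming the historical perturbation is buffered between epochs and refreshed with a little sign noise; `noise_scale` is a hypothetical hyperparameter.

```python
import torch

def prior_guided_init(delta_prior: torch.Tensor, eps: float,
                      noise_scale: float = 0.1) -> torch.Tensor:
    """Initialize FGSM from a perturbation buffered in an earlier epoch:
    add fresh sign noise and re-project to the eps-ball, giving a stronger
    starting point than random initialization."""
    noise = noise_scale * eps * torch.sign(torch.randn_like(delta_prior))
    return torch.clamp(delta_prior + noise, -eps, eps)

# usage inside a fast-adversarial-training step (models omitted):
eps = 8 / 255
buffered = torch.zeros(4, 3, 32, 32).uniform_(-eps, eps)  # stands in for the stored prior
delta0 = prior_guided_init(buffered, eps)
print(delta0.abs().max() <= eps)  # tensor(True)
```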
no code implementations • 25 Mar 2022 • Guoqiu Wang, Huanqian Yan, Xingxing Wei
For that, we propose a novel method named Spatial Momentum Iterative FGSM attack (SMI-FGSM), which introduces the mechanism of momentum accumulation from the temporal domain to the spatial domain by considering the context information from different regions within the image.
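A minimal sketch of one attack iteration combining spatial and temporal momentum, assuming random pixel masks as the region-sampling mechanism; the paper's actual region scheme and gradient normalization may differ.

```python
import torch

def smi_fgsm_step(model, x, y, g_t, eps, alpha, mu=1.0, n_regions=4):
    """One sketch-iteration mixing spatial and temporal momentum.

    Spatial momentum averages gradients over randomly masked copies of the
    input (a stand-in for region sampling); g_t is the usual MI-FGSM
    temporal accumulator.
    """
    loss_fn = torch.nn.CrossEntropyLoss()
    grads = []
    for _ in range(n_regions):
        mask = (torch.rand_like(x) > 0.1).float()          # keep ~90% of pixels
        xm = (x * mask).clone().requires_grad_(True)
        grads.append(torch.autograd.grad(loss_fn(model(xm), y), xm)[0])
    g_spatial = torch.stack(grads).mean(0)
    g_t = mu * g_t + g_spatial / g_spatial.abs().mean()    # momentum accumulation
    x_adv = torch.clamp(x + alpha * g_t.sign(), 0, 1)
    return x_adv, g_t

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(2, 3, 32, 32), torch.tensor([1, 3])
x_adv, g = smi_fgsm_step(model, x, y, torch.zeros_like(x), 8 / 255, 2 / 255)
print(x_adv.shape)
```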
no code implementations • ICCV 2021 • Siyuan Liang, Baoyuan Wu, Yanbo Fan, Xingxing Wei, Xiaochun Cao
Extensive experiments demonstrate that our method can effectively and efficiently attack various popular object detectors, including anchor-based and anchor-free, and generate transferable adversarial examples.
no code implementations • 29 Sep 2021 • Xingxing Wei, Ying Guo, Jie Yu, Huanqian Yan, Bo Zhang
In this paper, we propose a method to simultaneously optimize the position and perturbation to generate transferable adversarial patches, and thus obtain high attack success rates in the black-box setting.
2 code implementations • 1 Aug 2021 • Xiaojun Jia, Huanqian Yan, Yonglin Wu, Xingxing Wei, Xiaochun Cao, Yong Zhang
Moreover, we applied the proposed methods to the ACM MM2021 Robust Logo Detection competition organized by Alibaba on the Tianchi platform and placed in the top 2 among 36,489 teams.
no code implementations • ICML Workshop AML 2021 • Siyuan Liang, Xingxing Wei, Xiaochun Cao
The existing attack methods have the following problems: 1) training the generator takes a long time and is difficult to extend to a large dataset; 2) excessive destruction of the image features does not improve the black-box attack effect (the generated adversarial examples have poor transferability) and introduces visible perturbations.
1 code implementation • 11 May 2021 • Guoqiu Wang, Huanqian Yan, Ying Guo, Xingxing Wei
To improve the transferability of adversarial examples for the black-box setting, several methods have been proposed, e.g., input diversity, translation-invariant attack, and momentum-based attack.
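Of these, input diversity is the easiest to illustrate: a random resize-and-pad transform applied before each gradient step. The size range below assumes 32x32 inputs and is illustrative.

```python
import random
import torch
import torch.nn.functional as F

def input_diversity(x: torch.Tensor, low: int = 28, high: int = 32, p: float = 0.5):
    """With probability p, randomly resize the image and pad it back to the
    original size, so attack gradients are computed on varied views."""
    if random.random() > p:
        return x
    size = random.randint(low, high - 1)
    resized = F.interpolate(x, size=(size, size), mode="nearest")
    pad = high - size
    top, left = random.randint(0, pad), random.randint(0, pad)
    return F.pad(resized, (left, pad - left, top, pad - top), value=0.0)

x = torch.rand(1, 3, 32, 32)
print(input_diversity(x).shape)  # torch.Size([1, 3, 32, 32])
```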
1 code implementation • 14 Apr 2021 • Xingxing Wei, Ying Guo, Jie Yu
Unlike the previous adversarial patches by designing perturbations, our method manipulates the sticker's pasting position and rotation angle on the objects to perform physical attacks.
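A minimal sketch of pasting a fixed sticker at an attack-chosen position and rotation via an affine grid; the coordinate bookkeeping and mask heuristic are simplified stand-ins for the paper's rendering.

```python
import math
import torch
import torch.nn.functional as F

def paste_sticker(image: torch.Tensor, sticker: torch.Tensor,
                  cx: float, cy: float, angle_deg: float) -> torch.Tensor:
    """Paste a fixed sticker at normalized position (cx, cy) in [-1, 1] with
    the given rotation; only position and angle are the attack variables."""
    B, C, H, W = image.shape
    ph, pw = sticker.shape[-2:]
    theta = math.radians(angle_deg)
    sx, sy = W / pw, H / ph  # scale so the sticker keeps its pixel size
    # Affine matrix mapping image coords to sticker coords (rotate + shift).
    mat = torch.tensor([[math.cos(theta) * sx, -math.sin(theta) * sx, -cx * sx],
                        [math.sin(theta) * sy,  math.cos(theta) * sy, -cy * sy]])
    grid = F.affine_grid(mat.unsqueeze(0).expand(B, -1, -1), (B, C, H, W),
                         align_corners=False)
    canvas = F.grid_sample(sticker.expand(B, -1, -1, -1), grid,
                           align_corners=False, padding_mode="zeros")
    mask = (canvas.abs().sum(1, keepdim=True) > 0).float()  # where sticker lands
    return image * (1 - mask) + canvas * mask

img = torch.rand(1, 3, 128, 128)
stk = torch.rand(1, 3, 24, 24)
print(paste_sticker(img, stk, cx=0.3, cy=-0.2, angle_deg=30.0).shape)
```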
no code implementations • 12 Nov 2020 • Wenting Tang, Xingxing Wei, Bo Li
In the traditional deep compression framework, iteratively performing network pruning and quantization can reduce the model size and computation cost to meet the deployment requirements.
1 code implementation • 28 Oct 2020 • Yusheng Zhao, Huanqian Yan, Xingxing Wei
Additionally, we applied the proposed methods to the "Adversarial Challenge on Object Detection" competition organized by Alibaba on the Tianchi platform and placed in the top 7 among 1,701 teams.
no code implementations • ECCV 2020 • Siyuan Liang, Xingxing Wei, Siyuan Yao, Xiaochun Cao
In this paper, we analyze the weakness of object trackers based on the Siamese network and then extend adversarial examples to visual object tracking.
no code implementations • 27 May 2020 • Sha Yuan, Zhou Shao, Yu Zhang, Xingxing Wei, Tong Xiao, Yifan Wang, Jie Tang
In the progress of science, previously discovered knowledge principally inspires new scientific ideas, and citation is a reasonably good reflection of this cumulative nature of scientific research.
1 code implementation • 11 Jan 2020 • Xingxing Wei, Huanqian Yan, Bo Li
Adversarial attacks on video recognition models have been explored recently.
1 code implementation • 21 Nov 2019 • Zhipeng Wei, Jingjing Chen, Xingxing Wei, Linxi Jiang, Tat-Seng Chua, Fengfeng Zhou, Yu-Gang Jiang
To overcome this challenge, we propose a heuristic black-box attack model that generates adversarial perturbations only on the selected frames and regions.
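A minimal sketch of confining the perturbation to selected frames and a region mask; the heuristic that picks the frames and region is assumed to run upstream and is not shown.

```python
import torch

def sparse_video_perturbation(video: torch.Tensor, frame_idx, region_mask,
                              delta: torch.Tensor, eps: float) -> torch.Tensor:
    """Perturb only selected frames and regions of a video.

    video: (T, C, H, W); frame_idx: frames chosen by some heuristic;
    region_mask: (H, W) binary mask of the attacked region.
    """
    delta = torch.clamp(delta, -eps, eps) * region_mask  # confine to the region
    adv = video.clone()
    for t in frame_idx:
        adv[t] = torch.clamp(video[t] + delta, 0, 1)
    return adv

video = torch.rand(16, 3, 112, 112)
mask = torch.zeros(112, 112)
mask[40:70, 40:70] = 1.0
delta = torch.empty(3, 112, 112).uniform_(-8 / 255, 8 / 255)
adv = sparse_video_perturbation(video, [0, 5, 10], mask, delta, 8 / 255)
print((adv - video).abs().sum((1, 2, 3)) > 0)  # only frames 0, 5, 10 change
```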
no code implementations • 11 Sep 2019 • Xiaojun Jia, Xingxing Wei, Xiaochun Cao
We propose the temporal defense, which reconstructs the polluted frames with their temporally neighboring clean frames, to deal with adversarial videos with sparsely polluted frames.
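A minimal sketch of the temporal reconstruction idea, assuming polluted frames are already flagged and using linear interpolation between the nearest clean neighbors as a stand-in for the paper's reconstruction.

```python
import torch

def temporal_reconstruct(video: torch.Tensor, polluted: list) -> torch.Tensor:
    """Rebuild flagged frames from their nearest clean temporal neighbors."""
    T = video.shape[0]
    clean = sorted(set(range(T)) - set(polluted))
    out = video.clone()
    for t in polluted:
        prev = max((c for c in clean if c < t), default=None)
        nxt = min((c for c in clean if c > t), default=None)
        if prev is not None and nxt is not None:
            w = (t - prev) / (nxt - prev)                  # distance weighting
            out[t] = (1 - w) * video[prev] + w * video[nxt]
        else:                                              # boundary frame
            out[t] = video[prev if prev is not None else nxt]
    return out

video = torch.rand(8, 3, 64, 64)
print(temporal_reconstruct(video, polluted=[3, 4]).shape)
```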
1 code implementation • CVPR 2019 • Xiaojun Jia, Xingxing Wei, Xiaochun Cao, Hassan Foroosh
In other words, ComDefend can transform the adversarial image to its clean version, which is then fed to the trained classifier.
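A minimal sketch of the compress-then-reconstruct preprocessing idea: a tiny autoencoder with a quantized bottleneck purifies the input before the unchanged classifier sees it. The architecture below is illustrative, not ComDefend's actual networks.

```python
import torch
import torch.nn as nn

class TinyPurifier(nn.Module):
    """Squeeze the input through a narrow, quantized code (discarding
    high-frequency adversarial noise) and reconstruct a clean version."""
    def __init__(self):
        super().__init__()
        self.compress = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 4, 3, padding=1), nn.Sigmoid())   # narrow bottleneck
        self.reconstruct = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        code = torch.round(self.compress(x))   # hard quantization of the code
        return self.reconstruct(code)

purifier = TinyPurifier()
x_adv = torch.rand(1, 3, 32, 32)
x_clean = purifier(x_adv)                       # feed x_clean to the trained classifier
print(x_clean.shape)
```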
3 code implementations • 30 Nov 2018 • Xingxing Wei, Siyuan Liang, Ning Chen, Xiaochun Cao
Adversarial examples have been demonstrated to threaten many computer vision tasks including object detection.
no code implementations • 6 Nov 2018 • Sha Yuan, Yu Zhang, Jie Tang, Hua-Wei Shen, Xingxing Wei
Here we propose a deep learning attention mechanism to model the process through which individual items gain their popularity.
3 code implementations • 7 Mar 2018 • Xingxing Wei, Jun Zhu, Hang Su
Although adversarial samples of deep neural networks (DNNs) have been intensively studied on static images, their extensions to videos remain largely unexplored.