Search Results for author: Xingxing Wei

Found 45 papers, 21 papers with code

Removal and Selection: Improving RGB-Infrared Object Detection via Coarse-to-Fine Fusion

no code implementations 19 Jan 2024 Tianyi Zhao, Maoxun Yuan, Xingxing Wei

Specifically, following this perspective, we design a Redundant Spectrum Removal module to coarsely remove interfering information within each modality and a Dynamic Feature Selection module to finely select the desired features for feature fusion.

Feature Selection Object +3
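
The excerpt describes a two-stage coarse-to-fine design: per-modality removal of interfering information, then dynamic selection for fusion. A minimal PyTorch sketch of that idea follows; the module names, gating choices, and shapes are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class CoarseToFineFusion(nn.Module):
    """Illustrative two-stage RGB-IR fusion: coarse removal, then fine selection."""
    def __init__(self, channels: int):
        super().__init__()
        # Coarse stage: per-modality channel gates that suppress interfering responses.
        self.rgb_gate = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                      nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.ir_gate = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                     nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        # Fine stage: per-pixel selection weights over the two modalities.
        self.select = nn.Sequential(nn.Conv2d(2 * channels, 2, 3, padding=1),
                                    nn.Softmax(dim=1))

    def forward(self, rgb_feat, ir_feat):
        rgb_clean = rgb_feat * self.rgb_gate(rgb_feat)   # coarsely remove redundant RGB content
        ir_clean = ir_feat * self.ir_gate(ir_feat)       # coarsely remove redundant IR content
        w = self.select(torch.cat([rgb_clean, ir_clean], dim=1))
        # Finely select the desired features from each modality and fuse them.
        return w[:, 0:1] * rgb_clean + w[:, 1:2] * ir_clean

fused = CoarseToFineFusion(64)(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
```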

Embodied Adversarial Attack: A Dynamic Robust Physical Attack in Autonomous Driving

no code implementations 15 Dec 2023 Yitong Sun, Yao Huang, Xingxing Wei

As physical adversarial attacks are increasingly used to uncover potential risks in security-critical scenarios, especially autonomous driving, their vulnerability to environmental changes has also been brought to light.

Adversarial Attack Autonomous Driving

Improving Adversarial Robust Fairness via Anti-Bias Soft Label Distillation

no code implementations 9 Dec 2023 Shiji Zhao, Xizhe Wang, Xingxing Wei

In this paper, we give an in-depth analysis of the potential factors and argue that the smoothness degree of samples' soft labels for different classes (i.e., hard classes or easy classes) affects the robust fairness of DNN models, supported by both empirical observation and theoretical analysis.

Adversarial Robustness Fairness +1
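
The argument is that the smoothness of teacher soft labels differs across hard and easy classes, which skews robust fairness. Below is a toy sketch of class-dependent temperature softening in a distillation loss; the per-class temperatures and loss form are assumptions for illustration, not the paper's distillation algorithm.

```python
import torch
import torch.nn.functional as F

def class_temperature_kd(student_logits, teacher_logits, targets, class_temps, base_T=4.0):
    """Distillation loss whose softening temperature depends on each sample's class,
    so hard classes can get sharper (less smooth) soft labels than easy ones."""
    T = (base_T * class_temps[targets]).unsqueeze(1)   # per-sample temperature, broadcast over classes
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    log_probs = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_probs, soft_targets, reduction="batchmean")

num_classes = 10
class_temps = torch.ones(num_classes)   # e.g. smaller for hard classes, larger for easy classes
loss = class_temperature_kd(torch.randn(8, num_classes), torch.randn(8, num_classes),
                            torch.randint(0, num_classes, (8,)), class_temps)
```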

Classification Committee for Active Deep Object Detection

no code implementations 16 Aug 2023 Lei Zhao, Bo Li, Xingxing Wei

The role of the classification committee is to select the most informative images according to their uncertainty values from the view of classification, which is expected to focus more on the discrepancy and representativeness of instances.

Active Learning Classification +3
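
A rough sketch of the selection step: score each unlabeled image by the uncertainty of the committee's consensus prediction (entropy of the averaged probabilities, one common choice) and keep the top-k. The committee construction and the exact scoring rule in the paper may differ.

```python
import torch

def select_most_informative(committee_probs, k):
    """committee_probs: (num_members, num_images, num_classes) class probabilities
    from each committee classifier. Returns indices of the k most uncertain images."""
    mean_p = committee_probs.mean(dim=0)                                  # consensus distribution
    entropy = -(mean_p * torch.log(mean_p.clamp_min(1e-12))).sum(dim=1)   # (num_images,)
    return torch.topk(entropy, k).indices

probs = torch.softmax(torch.randn(5, 100, 20), dim=-1)   # 5 members, 100 images, 20 classes
chosen = select_most_informative(probs, k=10)
```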

Unified Adversarial Patch for Visible-Infrared Cross-modal Attacks in the Physical World

1 code implementation 27 Jul 2023 Xingxing Wei, Yao Huang, Yitong Sun, Jie Yu

We also demonstrate the effectiveness of our approach in physical-world scenarios under various settings, including different angles, distances, postures, and scenes for both visible and infrared sensors.

Defending Adversarial Patches via Joint Region Localizing and Inpainting

no code implementations 26 Jul 2023 Junwen Chen, Xingxing Wei

In this paper, we analyse the properties of adversarial patches and find that: on the one hand, adversarial patches lead to appearance or contextual inconsistency in the target objects; on the other hand, the patch region shows abnormal changes on the high-level feature maps of the objects extracted by a backbone network.

Improving Viewpoint Robustness for Visual Recognition via Adversarial Training

1 code implementation 21 Jul 2023 Shouwei Ruan, Yinpeng Dong, Hang Su, Jianteng Peng, Ning Chen, Xingxing Wei

Experimental results show that VIAT significantly improves the viewpoint robustness of various image classifiers based on the diversity of adversarial viewpoints generated by GMVFool.

Towards Viewpoint-Invariant Visual Recognition via Adversarial Training

1 code implementation ICCV 2023 Shouwei Ruan, Yinpeng Dong, Hang Su, Jianteng Peng, Ning Chen, Xingxing Wei

Visual recognition models are not invariant to viewpoint changes in the 3D world, as different viewing directions can dramatically affect the predictions given the same object.

Unified Adversarial Patch for Cross-modal Attacks in the Physical World

1 code implementation ICCV 2023 Xingxing Wei, Yao Huang, Yitong Sun, Jie Yu

To show the potential risks under such scenes, we propose a unified adversarial patch to perform cross-modal physical attacks, i.e., fooling visible and infrared object detectors at the same time via a single patch.

Structured Network Pruning by Measuring Filter-wise Interactions

no code implementations 3 Jul 2023 Wenting Tang, Xingxing Wei, Bo Li

Utilizing this new redundancy criterion, we propose a structured network pruning approach SNPFI (Structured Network Pruning by measuring Filter-wise Interaction).

Image Classification Network Pruning

Learning to Pan-sharpening with Memories of Spatial Details

1 code implementation 28 Jun 2023 Maoxun Yuan, Tianyi Zhao, Bo Li, Xingxing Wei

To address this issue, in this paper we observe that the spatial details from PAN images are mainly high-frequency cues, i.e., edges that reflect the contours of the input PAN images.
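
The observation that PAN spatial details are mainly high-frequency edge cues can be made concrete with a simple high-pass step: subtract a blurred copy to keep only the edges. A small PyTorch sketch under that assumption; the memory-based network in the paper is not reproduced here.

```python
import torch
import torch.nn.functional as F

def high_frequency(pan: torch.Tensor, blur_kernel: int = 5) -> torch.Tensor:
    """pan: (N, 1, H, W) panchromatic image. Returns its high-frequency (edge) component
    as the image minus a box-blurred low-frequency version."""
    k = torch.ones(1, 1, blur_kernel, blur_kernel, device=pan.device) / blur_kernel ** 2
    low = F.conv2d(pan, k, padding=blur_kernel // 2)    # low-frequency content
    return pan - low                                    # high-frequency spatial details (edges)

edges = high_frequency(torch.rand(1, 1, 128, 128))
```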

Mitigating the Accuracy-Robustness Trade-off via Multi-Teacher Adversarial Distillation

1 code implementation 28 Jun 2023 Shiji Zhao, Xizhe Wang, Xingxing Wei

Adversarial training is a practical approach for improving the robustness of deep neural networks against adversarial attacks.

Adversarial Robustness Knowledge Distillation

$\mathbf{C}^2$Former: Calibrated and Complementary Transformer for RGB-Infrared Object Detection

no code implementations 28 Jun 2023 Maoxun Yuan, Xingxing Wei

In $\mathrm{C}^2$Former, we design an Inter-modality Cross-Attention (ICA) module to obtain calibrated and complementary features by learning the cross-attention relationship between the RGB and IR modalities.

Object object-detection +1
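
The ICA module is described as learning cross-attention between the RGB and IR modalities. A generic bidirectional cross-attention sketch with torch.nn.MultiheadAttention is shown below; the dimensions, residual connections, and symmetric design are assumptions, and the actual $\mathrm{C}^2$Former block is more involved.

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Each modality queries the other: RGB tokens attend to IR tokens and vice versa."""
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.rgb_from_ir = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ir_from_rgb = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, rgb_tokens, ir_tokens):
        # rgb_tokens, ir_tokens: (batch, num_tokens, dim) flattened feature maps
        rgb_cal, _ = self.rgb_from_ir(rgb_tokens, ir_tokens, ir_tokens)  # query=RGB, key/value=IR
        ir_cal, _ = self.ir_from_rgb(ir_tokens, rgb_tokens, rgb_tokens)  # query=IR, key/value=RGB
        return rgb_tokens + rgb_cal, ir_tokens + ir_cal                  # residual calibration

rgb, ir = torch.randn(2, 400, 256), torch.randn(2, 400, 256)
rgb_out, ir_out = CrossModalAttention()(rgb, ir)
```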

Boosting Adversarial Transferability with Learnable Patch-wise Masks

1 code implementation 28 Jun 2023 Xingxing Wei, Shiji Zhao

The proposed approach is a preprocessing method and can be integrated with existing methods to further boost the transferability.

Distributional Modeling for Location-Aware Adversarial Patches

1 code implementation 28 Jun 2023 Xingxing Wei, Shouwei Ruan, Yinpeng Dong, Hang Su

In this paper, we propose the Distribution-Optimized Adversarial Patch (DOPatch), a novel method that optimizes a multimodal distribution of adversarial locations instead of individual ones.

Face Recognition

DIFFender: Diffusion-Based Adversarial Defense against Patch Attacks

no code implementations 15 Jun 2023 Caixin Kang, Yinpeng Dong, Zhengyi Wang, Shouwei Ruan, Yubo Chen, Hang Su, Xingxing Wei

In this paper, we propose DIFFender, a novel defense method that leverages a text-guided diffusion model to defend against adversarial patches.

Adversarial Defense Face Recognition +1

Revisiting the Trade-off between Accuracy and Robustness via Weight Distribution of Filters

no code implementations 6 Jun 2023 Xingxing Wei, Shiji Zhao

Secondly, based on this observation, we propose a sample-wise dynamic network architecture named Adversarial Weight-Varied Network (AW-Net), which focuses on dealing with clean and adversarial examples with a "divide and rule" weight strategy.
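
The "divide and rule" idea, i.e. handling clean and adversarial inputs with different weights, can be illustrated with a per-sample gate that mixes two parallel branches. This is only a toy stand-in for AW-Net's weight-varied layers, with made-up layer sizes.

```python
import torch
import torch.nn as nn

class GatedDualBranch(nn.Module):
    """Toy sample-wise routing: a small gate decides, per input, how much to trust
    a 'clean' branch versus a 'robust' branch."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.clean_branch = nn.Conv2d(channels, 16, 3, padding=1)
        self.robust_branch = nn.Conv2d(channels, 16, 3, padding=1)
        self.gate = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(channels, 1), nn.Sigmoid())

    def forward(self, x):
        g = self.gate(x).view(-1, 1, 1, 1)            # per-sample mixing coefficient in [0, 1]
        return g * self.clean_branch(x) + (1 - g) * self.robust_branch(x)

out = GatedDualBranch()(torch.rand(4, 3, 32, 32))
```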

Improving Fast Adversarial Training with Prior-Guided Knowledge

no code implementations 1 Apr 2023 Xiaojun Jia, Yong Zhang, Xingxing Wei, Baoyuan Wu, Ke Ma, Jue Wang, Xiaochun Cao

This initialization is generated by using high-quality adversarial perturbations from the historical training process.

Preventing Unauthorized AI Over-Analysis by Medical Image Adversarial Watermarking

no code implementations 17 Mar 2023 Xingxing Wei, Bangzheng Pu, Shiji Zhao, Chen Chi, Huazhu Fu

The advancement of deep learning has facilitated the integration of Artificial Intelligence (AI) into clinical practices, particularly in computer-aided diagnosis.

Diabetic Retinopathy Detection Semantic Segmentation

Simultaneously Optimizing Perturbations and Positions for Black-box Adversarial Patch Attacks

1 code implementation 26 Dec 2022 Xingxing Wei, Ying Guo, Jie Yu, Bo Zhang

Extensive experiments are conducted on the Face Recognition (FR) task, and results on four representative FR models show that our method can significantly improve the attack success rate and query efficiency.

Face Recognition Position +2

Visually Adversarial Attacks and Defenses in the Physical World: A Survey

no code implementations 3 Nov 2022 Xingxing Wei, Bangzheng Pu, Jiefan Lu, Baoyuan Wu

The current adversarial attacks in computer vision can be divided into digital attacks and physical attacks according to their different attack forms.

Adversarial Robustness

Translation, Scale and Rotation: Cross-Modal Alignment Meets RGB-Infrared Vehicle Detection

no code implementations 28 Sep 2022 Maoxun Yuan, Yinyan Wang, Xingxing Wei

Then, we propose a Translation-Scale-Rotation Alignment (TSRA) module to address the problem by calibrating the feature maps from these two modalities.

Crowd Counting Object +5
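
Calibrating one modality's feature map with a predicted translation, scale, and rotation can be written with affine_grid/grid_sample. In the sketch below the transform parameters are placeholders standing in for whatever the TSRA module's prediction head would output.

```python
import torch
import torch.nn.functional as F

def align_feature_map(feat, tx, ty, scale, angle):
    """Warp feat (N, C, H, W) by a translation-scale-rotation transform
    (tx, ty in normalized [-1, 1] coordinates, angle in radians)."""
    cos, sin = torch.cos(angle), torch.sin(angle)
    theta = torch.stack([
        torch.stack([scale * cos, -scale * sin, tx], dim=-1),
        torch.stack([scale * sin,  scale * cos, ty], dim=-1),
    ], dim=-2)                                            # (N, 2, 3) affine matrices
    grid = F.affine_grid(theta, list(feat.shape), align_corners=False)
    return F.grid_sample(feat, grid, align_corners=False)

feat = torch.randn(1, 64, 40, 40)
tx, ty, scale, angle = [torch.tensor([v]) for v in (0.05, -0.02, 1.0, 0.03)]  # placeholder predictions
aligned = align_feature_map(feat, tx, ty, scale, angle)
```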

Prior-Guided Adversarial Initialization for Fast Adversarial Training

1 code implementation 18 Jul 2022 Xiaojun Jia, Yong Zhang, Xingxing Wei, Baoyuan Wu, Ke Ma, Jue Wang, Xiaochun Cao

Based on the observation, we propose a prior-guided FGSM initialization method to avoid overfitting after investigating several initialization strategies, improving the quality of the AEs during the whole training process.

Adversarial Attack Adversarial Attack on Video Classification
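
A minimal sketch of the initialization idea in fast adversarial training: start the FGSM step from a perturbation kept from the previous epoch rather than from random noise. The buffer handling and the regularizer proposed in the paper are omitted, and none of this is the authors' released code.

```python
import torch
import torch.nn.functional as F

def fgsm_step_with_prior(model, x, y, prior_delta, eps, alpha):
    """One FGSM-style step that starts from a stored (prior) perturbation."""
    delta = prior_delta.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x + delta), y)
    grad = torch.autograd.grad(loss, delta)[0]
    delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
    x_adv = (x + delta).clamp(0, 1)
    return x_adv, delta   # the returned delta is stored as the prior for the next epoch

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(4, 3, 32, 32), torch.randint(0, 10, (4,))
prior = torch.zeros_like(x)                         # kept per batch across epochs
x_adv, prior = fgsm_step_with_prior(model, x, y, prior, eps=8 / 255, alpha=10 / 255)
```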

Enhancing Transferability of Adversarial Examples with Spatial Momentum

no code implementations 25 Mar 2022 Guoqiu Wang, Huanqian Yan, Xingxing Wei

For that, we propose a novel method named Spatial Momentum Iterative FGSM attack (SMI-FGSM), which introduces the mechanism of momentum accumulation from the temporal domain to the spatial domain by considering context information from different regions within the image.

Adversarial Attack
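
The spatial-momentum idea, aggregating gradients from several randomly masked copies of the image before the usual momentum update, can be sketched roughly as below. The mask sampling, normalization, and step sizes are assumptions; see the paper for the exact SMI-FGSM formulation, and note that the projection back to the epsilon-ball and the [0, 1] clip are omitted here.

```python
import torch
import torch.nn.functional as F

def smi_fgsm_step(model, x, y, g_momentum, step, mu=1.0, n_masks=8, keep_prob=0.9):
    """One iteration: average gradients over randomly masked copies (spatial aggregation),
    fold the result into the running momentum, then take a sign step."""
    grads = []
    for _ in range(n_masks):
        mask = (torch.rand_like(x[:, :1]) < keep_prob).float()     # random spatial mask
        x_in = (x * mask).detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_in), y)
        grads.append(torch.autograd.grad(loss, x_in)[0])
    g = torch.stack(grads).mean(dim=0)
    g = g / g.abs().mean(dim=(1, 2, 3), keepdim=True).clamp_min(1e-12)   # L1 normalization
    g_momentum = mu * g_momentum + g                                      # temporal momentum
    return x + step * g_momentum.sign(), g_momentum
```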

Parallel Rectangle Flip Attack: A Query-based Black-box Attack against Object Detection

no code implementations ICCV 2021 Siyuan Liang, Baoyuan Wu, Yanbo Fan, Xingxing Wei, Xiaochun Cao

Extensive experiments demonstrate that our method can effectively and efficiently attack various popular object detectors, including anchor-based and anchor-free, and generate transferable adversarial examples.

Autonomous Driving Image Classification +2

Generating Transferable Adversarial Patch by Simultaneously Optimizing its Position and Perturbations

no code implementations 29 Sep 2021 Xingxing Wei, Ying Guo, Jie Yu, Huanqian Yan, Bo Zhang

In this paper, we propose a method to simultaneously optimize the position and perturbation to generate transferable adversarial patches, and thus obtain high attack success rates in the black-box setting.

Face Recognition Position

An Effective and Robust Detector for Logo Detection

2 code implementations 1 Aug 2021 Xiaojun Jia, Huanqian Yan, Yonglin Wu, Xingxing Wei, Xiaochun Cao, Yong Zhang

Moreover, we have applied the proposed methods to the ACM MM2021 Robust Logo Detection competition organized by Alibaba on the Tianchi platform and ranked in the top 2 among 36,489 teams.

Data Augmentation

Generate More Imperceptible Adversarial Examples for Object Detection

no code implementations ICML Workshop AML 2021 Siyuan Liang, Xingxing Wei, Xiaochun Cao

The existing attack methods have the following problems: 1) training the generator takes a long time and is difficult to extend to large datasets; 2) excessive destruction of image features does not improve the black-box attack effect (the generated adversarial examples have poor transferability) and brings about visible perturbations.

Object object-detection +1

Improving Adversarial Transferability with Gradient Refining

1 code implementation 11 May 2021 Guoqiu Wang, Huanqian Yan, Ying Guo, Xingxing Wei

To improve the transferability of adversarial examples for the black-box setting, several methods have been proposed, e.g., input diversity, translation-invariant attack, and momentum-based attack.

Adversarial Attack Translation

Adversarial Sticker: A Stealthy Attack Method in the Physical World

1 code implementation 14 Apr 2021 Xingxing Wei, Ying Guo, Jie Yu

Unlike the previous adversarial patches by designing perturbations, our method manipulates the sticker's pasting position and rotation angle on the objects to perform physical attacks.

Face Recognition Image Retrieval +4
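
Because the sticker's content is fixed, the attack reduces to searching over where to paste it and at what angle. A toy random-search sketch with PIL is given below; the paper's search procedure is more principled, and score_fn is a placeholder for whatever query feedback the target model provides.

```python
import random
from PIL import Image

def paste_sticker(face: Image.Image, sticker: Image.Image, x: int, y: int, angle: float) -> Image.Image:
    """Rotate the sticker and paste it onto a copy of the face image at (x, y)."""
    rotated = sticker.rotate(angle, expand=True).convert("RGBA")
    out = face.copy()
    out.paste(rotated, (x, y), rotated)        # the alpha channel acts as the paste mask
    return out

def random_search(face, sticker, score_fn, n_trials=200):
    """score_fn(img) should return how strongly the target model is fooled (higher is better)."""
    best, best_score = None, float("-inf")
    for _ in range(n_trials):
        x = random.randint(0, max(0, face.width - sticker.width))
        y = random.randint(0, max(0, face.height - sticker.height))
        angle = random.uniform(-45.0, 45.0)
        candidate = paste_sticker(face, sticker, x, y, angle)
        score = score_fn(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score
```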

Automated Model Compression by Jointly Applied Pruning and Quantization

no code implementations 12 Nov 2020 Wenting Tang, Xingxing Wei, Bo Li

In the traditional deep compression framework, iteratively performing network pruning and quantization can reduce the model size and computation cost to meet the deployment requirements.

AutoML Model Compression +4

Object Hider: Adversarial Patch Attack Against Object Detectors

1 code implementation 28 Oct 2020 Yusheng Zhao, Huanqian Yan, Xingxing Wei

Additionally, we have applied the proposed methods to the "Adversarial Challenge on Object Detection" competition organized by Alibaba on the Tianchi platform and ranked in the top 7 among 1,701 teams.

Adversarial Attack Object +2

Efficient Adversarial Attacks for Visual Object Tracking

no code implementations ECCV 2020 Siyuan Liang, Xingxing Wei, Siyuan Yao, Xiaochun Cao

In this paper, we analyze the weakness of object trackers based on the Siamese network and then extend adversarial examples to visual object tracking.

Object Visual Object Tracking +1

Attention: to Better Stand on the Shoulders of Giants

no code implementations 27 May 2020 Sha Yuan, Zhou Shao, Yu Zhang, Xingxing Wei, Tong Xiao, Yifan Wang, Jie Tang

In the progress of science, previously discovered knowledge principally inspires new scientific ideas, and citation is a reasonably good reflection of this cumulative nature of scientific research.

Heuristic Black-box Adversarial Attacks on Video Recognition Models

1 code implementation 21 Nov 2019 Zhipeng Wei, Jingjing Chen, Xingxing Wei, Linxi Jiang, Tat-Seng Chua, Fengfeng Zhou, Yu-Gang Jiang

To overcome this challenge, we propose a heuristic black-box attack model that generates adversarial perturbations only on the selected frames and regions.

Adversarial Attack Video Recognition
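
The key point is that perturbations are only generated on heuristically selected frames and regions, which shrinks the black-box search space. Below is a crude sketch of one plausible heuristic (motion-based frame selection plus a central region mask); the paper's actual selection strategy is different.

```python
import torch

def build_sparse_mask(video, n_frames=4, region=0.5):
    """video: (T, C, H, W). Pick the n_frames with the largest inter-frame change and
    restrict the perturbation to a central region covering `region` of each side."""
    T, C, H, W = video.shape
    motion = torch.zeros(T)
    motion[1:] = (video[1:] - video[:-1]).abs().mean(dim=(1, 2, 3))   # motion proxy per frame
    frame_idx = torch.topk(motion, n_frames).indices
    mask = torch.zeros_like(video)
    h0, w0 = int(H * (1 - region) / 2), int(W * (1 - region) / 2)
    mask[frame_idx, :, h0:H - h0, w0:W - w0] = 1.0
    return mask   # the adversarial video is video + delta * mask

mask = build_sparse_mask(torch.rand(16, 3, 112, 112))
```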

Identifying and Resisting Adversarial Videos Using Temporal Consistency

no code implementations 11 Sep 2019 Xiaojun Jia, Xingxing Wei, Xiaochun Cao

We propose the temporal defense, which reconstructs the polluted frames with their temporally neighboring clean frames, to deal with adversarial videos that have sparsely polluted frames.

Video Classification
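
The temporal defense reconstructs a suspected polluted frame from its clean temporal neighbors. A minimal sketch that simply averages the nearest clean neighbors is shown below; how polluted frames are detected, and the reconstruction itself, are the paper's contribution and are not reproduced here.

```python
import torch

def reconstruct_polluted_frames(video, polluted_idx):
    """video: (T, C, H, W); polluted_idx: frame indices flagged as adversarial.
    Replace each flagged frame by the average of its nearest clean neighbors."""
    T = video.shape[0]
    polluted = {int(i) for i in polluted_idx}
    clean_idx = [t for t in range(T) if t not in polluted]
    out = video.clone()
    for t in polluted:
        prev = max((c for c in clean_idx if c < t), default=None)
        nxt = min((c for c in clean_idx if c > t), default=None)
        neighbors = [video[i] for i in (prev, nxt) if i is not None]
        out[t] = torch.stack(neighbors).mean(dim=0)
    return out

restored = reconstruct_polluted_frames(torch.rand(16, 3, 112, 112), polluted_idx=[3, 9])
```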

ComDefend: An Efficient Image Compression Model to Defend Adversarial Examples

1 code implementation CVPR 2019 Xiaojun Jia, Xingxing Wei, Xiaochun Cao, Hassan Foroosh

In other words, ComDefend can transform the adversarial image to its clean version, which is then fed to the trained classifier.

Image Compression
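
ComDefend works as a compression-reconstruction preprocessing step: an encoder compresses the input to a coarse code and a decoder reconstructs a clean-looking image that is then fed to the unchanged classifier. A skeletal PyTorch version follows; the layer sizes and the hard quantization are simplified assumptions rather than the released model.

```python
import torch
import torch.nn as nn

class CompressReconstruct(nn.Module):
    """Toy ComDefend-style preprocessing: compress to a coarse code, then reconstruct."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ELU(),
                                     nn.Conv2d(32, 12, 3, padding=1), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Conv2d(12, 32, 3, padding=1), nn.ELU(),
                                     nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        code = (self.encoder(x) > 0.5).float()   # quantize; small adversarial details are dropped
        return self.decoder(code)

purifier = CompressReconstruct()
clean_view = purifier(torch.rand(1, 3, 224, 224))   # this, not the raw input, goes to the trained classifier
```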

Transferable Adversarial Attacks for Image and Video Object Detection

2 code implementations 30 Nov 2018 Xingxing Wei, Siyuan Liang, Ning Chen, Xiaochun Cao

Adversarial examples have been demonstrated to threaten many computer vision tasks including object detection.

Generative Adversarial Network Object +2

Modeling and Predicting Popularity Dynamics via Deep Learning Attention Mechanism

no code implementations 6 Nov 2018 Sha Yuan, Yu Zhang, Jie Tang, Hua-Wei Shen, Xingxing Wei

Here we propose a deep learning attention mechanism to model the process through which individual items gain their popularity.

Sparse Adversarial Perturbations for Videos

1 code implementation 7 Mar 2018 Xingxing Wei, Jun Zhu, Hang Su

Although adversarial samples of deep neural networks (DNNs) have been intensively studied on static images, their extensions to videos remain unexplored.

Action Recognition Temporal Action Localization
