Search Results for author: Siyuan Liang

Found 65 papers, 17 papers with code

No Query, No Access

no code implementations • 12 May 2025 • WenQiang Wang, Siyuan Liang, Yangshijie Zhang, Xiaojun Jia, Hao Lin, Xiaochun Cao

Since the attack assumes no access to the victim model, we create a shadow dataset with publicly available pre-trained models and clustering methods as a foundation for developing substitute models.

Adversarial Attack

Jailbreaking the Text-to-Video Generative Models

no code implementations • 10 May 2025 • Jiayang Liu, Siyuan Liang, Shiqian Zhao, RongCheng Tu, Wenbo Zhou, Xiaochun Cao, DaCheng Tao, Siew Kei Lam

Our approach formulates the prompt generation task as an optimization problem with three key objectives: (1) maximizing the semantic similarity between the input and generated prompts, (2) ensuring that the generated prompts can evade the safety filter of the text-to-video model, and (3) maximizing the semantic similarity between the generated videos and the original input prompts.

Semantic Similarity • Semantic Textual Similarity
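The three-objective formulation described in the entry above maps naturally onto a single scalar score to maximize over candidate prompts. The Python sketch below is only an illustration of that idea, not the authors' code: embed_text, embed_video, and passes_safety_filter are hypothetical stand-ins for a text encoder, a video encoder, and the target model's safety filter.

import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def candidate_score(input_prompt, candidate_prompt, generated_video,
                    embed_text, embed_video, passes_safety_filter,
                    w_prompt=1.0, w_video=1.0):
    # (1) keep the rewritten prompt semantically close to the user's input
    prompt_sim = cosine(embed_text(input_prompt), embed_text(candidate_prompt))
    # (2) the rewritten prompt must evade the text-to-video model's safety filter
    if not passes_safety_filter(candidate_prompt):
        return float("-inf")
    # (3) keep the generated video semantically close to the original input prompt
    video_sim = cosine(embed_text(input_prompt), embed_video(generated_video))
    return w_prompt * prompt_sim + w_video * video_sim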

Natural Reflection Backdoor Attack on Vision Language Model for Autonomous Driving

no code implementations • 9 May 2025 • Ming Liu, Siyuan Liang, Koushik Howlader, LiWen Wang, DaCheng Tao, Wensheng Zhang

Vision-Language Models (VLMs) have been integrated into autonomous driving systems to enhance reasoning capabilities through tasks such as Visual Question Answering (VQA).

Autonomous Driving • Backdoor Attack • +4

T2VShield: Model-Agnostic Jailbreak Defense for Text-to-Video Models

no code implementations • 22 Apr 2025 • Siyuan Liang, Jiayang Liu, Jiecheng Zhai, Tianmeng Fang, RongCheng Tu, Aishan Liu, Xiaochun Cao, DaCheng Tao

The rapid development of generative artificial intelligence has made text-to-video models essential for building future multimodal world simulators.

Manipulating Multimodal Agents via Cross-Modal Prompt Injection

no code implementations • 19 Apr 2025 • Le Wang, Zonghao Ying, Tianyuan Zhang, Siyuan Liang, Shengshan Hu, Mingchuan Zhang, Aishan Liu, Xianglong Liu

The emergence of multimodal large language models has redefined the agent paradigm by integrating language and vision modalities with external data sources, enabling agents to better interpret human instructions and execute increasingly complex tasks.

Large Language Model

Less is More: Efficient Black-box Attribution via Minimal Interpretable Subset Selection

2 code implementations • 1 Apr 2025 • Ruoyu Chen, Siyuan Liang, Jingzhi Li, Shiming Liu, Li Liu, Hua Zhang, Xiaochun Cao

We then rank input sub-regions by their importance for attribution and improve optimization efficiency through a novel bidirectional greedy search algorithm.
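As a rough, hedged illustration of what a bidirectional greedy search over sub-regions could look like (this is not the paper's implementation), the sketch below combines a forward insertion pass with a backward deletion pass; score(mask) is a hypothetical black-box call returning the model's confidence when only the sub-regions in mask are kept visible.

from typing import Callable, FrozenSet, List

def bidirectional_greedy_rank(n: int, score: Callable[[FrozenSet[int]], float]) -> List[int]:
    # Forward pass: greedily insert the sub-region that raises the score most.
    kept, insert_rank = set(), {}
    for step in range(n):
        best = max(set(range(n)) - kept, key=lambda r: score(frozenset(kept | {r})))
        kept.add(best)
        insert_rank[best] = step
    # Backward pass: greedily delete the sub-region whose removal hurts least.
    alive, delete_rank = set(range(n)), {}
    for step in range(n):
        least_useful = max(alive, key=lambda r: score(frozenset(alive - {r})))
        alive.discard(least_useful)
        delete_rank[least_useful] = n - 1 - step  # deleted late = more important
    # Lower combined rank means more important for the final attribution.
    return sorted(range(n), key=lambda r: insert_rank[r] + delete_rank[r])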

Lie Detector: Unified Backdoor Detection via Cross-Examination Framework

no code implementations • 21 Mar 2025 • Xuan Wang, Siyuan Liang, Dongping Liao, Han Fang, Aishan Liu, Xiaochun Cao, Yu-liang Lu, Ee-Chien Chang, Xitong Gao

Institutions with limited data and computing resources often outsource model training to third-party providers in a semi-honest setting, assuming adherence to prescribed training protocols with a pre-defined learning paradigm (e.g., supervised or semi-supervised learning).

LEDiT: Your Length-Extrapolatable Diffusion Transformer without Positional Encoding

no code implementations • 6 Mar 2025 • Shen Zhang, Yaning Tan, Siyuan Liang, Linze Li, Ge Wu, Yuhao Chen, Shuheng Li, Zhenyu Zhao, Caihua Chen, Jiajun Liang, Yao Tang

Diffusion transformers (DiTs) struggle to generate images at resolutions higher than their training resolutions.

Adversarial Training for Multimodal Large Language Models against Jailbreak Attacks

no code implementations • 5 Mar 2025 • Liming Lu, Shuchao Pang, Siyuan Liang, Haotian Zhu, Xiyu Zeng, Aishan Liu, Yunhuai Liu, Yongbin Zhou

In this paper, we present the first adversarial training (AT) paradigm tailored to defend against jailbreak attacks during the MLLM training phase.

ELBA-Bench: An Efficient Learning Backdoor Attacks Benchmark for Large Language Models

no code implementations • 22 Feb 2025 • Xuxu Liu, Siyuan Liang, Mengya Han, Yong Luo, Aishan Liu, Xiantao Cai, Zheng He, DaCheng Tao

Generative large language models are crucial in natural language processing, but they are vulnerable to backdoor attacks, where subtle triggers compromise their behavior.

Backdoor Attack • In-Context Learning • +1

Reasoning-Augmented Conversation for Multi-Turn Jailbreak Attacks on Large Language Models

1 code implementation • 16 Feb 2025 • Zonghao Ying, Deyue Zhang, Zonglei Jing, Yisong Xiao, Quanchen Zou, Aishan Liu, Siyuan Liang, Xiangzheng Zhang, Xianglong Liu, DaCheng Tao

Multi-turn jailbreak attacks simulate real-world human interactions by engaging large language models (LLMs) in iterative dialogues, exposing critical safety vulnerabilities.

Safety Alignment

Black-Box Adversarial Attack on Vision Language Models for Autonomous Driving

no code implementations • 23 Jan 2025 • Lu Wang, Tianyuan Zhang, Yang Qu, Siyuan Liang, Yuwei Chen, Aishan Liu, Xianglong Liu, DaCheng Tao

We identify two key challenges for achieving effective black-box attacks in this context: the effectiveness across driving reasoning chains in AD systems and the dynamic nature of driving scenarios.

Adversarial Attack • Autonomous Driving

CogMorph: Cognitive Morphing Attacks for Text-to-Image Models

no code implementations • 21 Jan 2025 • Zonglei Jing, Zonghao Ying, Le Wang, Siyuan Liang, Aishan Liu, Xianglong Liu, DaCheng Tao

The development of text-to-image (T2I) generative models, which enable the creation of high-quality synthetic images from textual prompts, has opened new frontiers in creative design and content generation.

Red Pill and Blue Pill: Controllable Website Fingerprinting Defense via Dynamic Backdoor Learning

no code implementations • 16 Dec 2024 • Siyuan Liang, Jiajun Gong, Tianmeng Fang, Aishan Liu, Tao Wang, Xianglong Liu, Xiaochun Cao, DaCheng Tao, Chang Ee-Chien

CWFD exploits backdoor vulnerabilities in neural networks to directly control the attacker's model by designing trigger patterns based on network traffic.

Website Fingerprinting Defense

CopyrightShield: Spatial Similarity Guided Backdoor Defense against Copyright Infringement in Diffusion Models

no code implementations • 2 Dec 2024 • Zhixiang Guo, Siyuan Liang, Aishan Liu, DaCheng Tao

The diffusion model has gained significant attention due to its remarkable data generation ability in fields such as image synthesis.

backdoor defense • Image Generation • +1

Interpreting Object-level Foundation Models via Visual Precision Search

2 code implementations • 25 Nov 2024 • Ruoyu Chen, Siyuan Liang, Jingzhi Li, Shiming Liu, Maosen Li, Zheng Huang, Hua Zhang, Xiaochun Cao

Advances in multimodal pre-training have propelled object-level foundation models, such as Grounding DINO and Florence-2, in tasks like visual grounding and object detection.

Explainable Artificial Intelligence (XAI) • Object • +3

NoVo: Norm Voting off Hallucinations with Attention Heads in Large Language Models

no code implementations • 11 Oct 2024 • Zheng Yi Ho, Siyuan Liang, Sen Zhang, Yibing Zhan, DaCheng Tao

NoVo demonstrates exceptional generalization to 20 diverse datasets, with significant gains in over 90% of them, far exceeding all current representation editing and reading methods.

Multiple-choice • TruthfulQA

Patch is Enough: Naturalistic Adversarial Patch against Vision-Language Pre-training Models

no code implementations • 7 Oct 2024 • Dehong Kong, Siyuan Liang, Xiaopeng Zhu, Yuansheng Zhong, Wenqi Ren

Visual language pre-training (VLP) models have demonstrated significant success across various domains, yet they remain vulnerable to adversarial attacks.

Image to text

CleanerCLIP: Fine-grained Counterfactual Semantic Augmentation for Backdoor Defense in Contrastive Learning

no code implementations • 26 Sep 2024 • Yuan Xun, Siyuan Liang, Xiaojun Jia, Xinwei Liu, Xiaochun Cao

However, in the unsupervised and semi-supervised domain, we find that when CLIP faces some complex attack techniques, the existing fine-tuning defense strategy, CleanCLIP, has some limitations on defense performance.

backdoor defense • Contrastive Learning • +3

Towards Robust Object Detection: Identifying and Removing Backdoors via Module Inconsistency Analysis

no code implementations • 24 Sep 2024 • Xianda Zhang, Siyuan Liang

Object detection models, widely used in security-critical applications, are vulnerable to backdoor attacks that cause targeted misclassifications when triggered by specific patterns.

backdoor defense • Object • +3

Adversarial Backdoor Defense in CLIP

no code implementations • 24 Sep 2024 • Junhao Kuang, Siyuan Liang, Jiawei Liang, Kuanrong Liu, Xiaochun Cao

Observations reveal that adversarial examples and backdoor samples exhibit similarities in the feature space within the compromised models.

backdoor defense • Data Augmentation

Module-wise Adaptive Adversarial Training for End-to-end Autonomous Driving

no code implementations • 11 Sep 2024 • Tianyuan Zhang, Lu Wang, Jiaqi Kang, Xinwei Zhang, Siyuan Liang, Yuwei Chen, Aishan Liu, Xianglong Liu

Recent advances in deep learning have markedly improved autonomous driving (AD) models, particularly end-to-end systems that integrate perception, prediction, and planning stages, achieving state-of-the-art performance.

Autonomous Driving

Compromising Embodied Agents with Contextual Backdoor Attacks

no code implementations • 6 Aug 2024 • Aishan Liu, Yuguang Zhou, Xianglong Liu, Tianyuan Zhang, Siyuan Liang, Jiakai Wang, Yanjun Pu, Tianlin Li, Junqi Zhang, Wenbo Zhou, Qing Guo, DaCheng Tao

To enable context-dependent behaviors in downstream agents, we implement a dual-modality activation strategy that controls both the generation and execution of program defects through textual and visual triggers.

Autonomous Driving • Robot Manipulation • +1

GenderBias-VL: Benchmarking Gender Bias in Vision Language Models via Counterfactual Probing

no code implementations • 30 Jun 2024 • Yisong Xiao, Aishan Liu, QianJia Cheng, Zhenfei Yin, Siyuan Liang, Jiapeng Li, Jing Shao, Xianglong Liu, DaCheng Tao

For the first time, this paper introduces the GenderBias-VL benchmark to evaluate occupation-related gender bias in LVLMs using counterfactual visual questions under individual fairness criteria.

Benchmarking • counterfactual • +3

Revisiting Backdoor Attacks against Large Vision-Language Models from Domain Shift

no code implementations • 27 Jun 2024 • Siyuan Liang, Jiawei Liang, Tianyu Pang, Chao Du, Aishan Liu, Mingli Zhu, Xiaochun Cao, DaCheng Tao

Instruction tuning enhances large vision-language models (LVLMs) but increases their vulnerability to backdoor attacks due to their open design.

Backdoor Attack • Domain Generalization

Jailbreak Vision Language Models via Bi-Modal Adversarial Prompt

1 code implementation • 6 Jun 2024 • Zonghao Ying, Aishan Liu, Tianyuan Zhang, Zhengmin Yu, Siyuan Liang, Xianglong Liu, DaCheng Tao

To address this limitation, this paper introduces the Bi-Modal Adversarial Prompt Attack (BAP), which executes jailbreaks by optimizing textual and visual prompts cohesively.

Language Modelling • Large Language Model • +1

LanEvil: Benchmarking the Robustness of Lane Detection to Environmental Illusions

no code implementations • 3 Jun 2024 • Tianyuan Zhang, Lu Wang, Hainan Li, Yisong Xiao, Siyuan Liang, Aishan Liu, Xianglong Liu, DaCheng Tao

For the first time, this paper studies the potential threats caused by these environmental illusions to LD and establishes the first comprehensive benchmark LanEvil for evaluating the robustness of LD against this natural corruption.

Autonomous Driving • Benchmarking • +1

Correlation Matching Transformation Transformers for UHD Image Restoration

1 code implementation • 2 Jun 2024 • Cong Wang, Jinshan Pan, Wei Wang, Gang Fu, Siyuan Liang, Mengzhu Wang, Xiao-Ming Wu, Jun Liu

To better improve feature representation in low-resolution space, we propose to build feature transformation from the high-resolution space to the low-resolution one.

Deblurring • Image Deblurring • +3

Breaking the False Sense of Security in Backdoor Defense through Re-Activation Attack

no code implementations • 25 May 2024 • Mingli Zhu, Siyuan Liang, Baoyuan Wu

Surprisingly, we find that the original backdoors still exist in defense models derived from existing post-training defense strategies, and we measure this persistence with a novel metric called the backdoor existence coefficient.

Adversarial Attack • backdoor defense • +2

Environmental Matching Attack Against Unmanned Aerial Vehicles Object Detection

no code implementations • 13 May 2024 • Dehong Kong, Siyuan Liang, Wenqi Ren

To the best of our knowledge, this paper is the first to consider natural patches in the domain of UAVs.

object-detection • Object Detection

Towards Robust Physical-world Backdoor Attacks on Lane Detection

no code implementations • 9 May 2024 • Xinwei Zhang, Aishan Liu, Tianyuan Zhang, Siyuan Liang, Xianglong Liu

Existing backdoor attack methods on LD exhibit limited effectiveness in dynamic real-world scenarios, primarily because they fail to consider dynamic scene factors, including changes in driving perspectives (e.g., viewpoint transformations) and environmental conditions (e.g., weather or lighting changes).

Autonomous Driving • Backdoor Attack • +2

Object Detectors in the Open Environment: Challenges, Solutions, and Outlook

1 code implementation • 24 Mar 2024 • Siyuan Liang, Wei Wang, Ruoyu Chen, Aishan Liu, Boxi Wu, Ee-Chien Chang, Xiaochun Cao, DaCheng Tao

This paper aims to bridge this gap by conducting a comprehensive review and analysis of object detectors in open environments.

Incremental Learning • Object

Unlearning Backdoor Threats: Enhancing Backdoor Defense in Multimodal Contrastive Learning via Local Token Unlearning

no code implementations • 24 Mar 2024 • Siyuan Liang, Kuanrong Liu, Jiajun Gong, Jiawei Liang, Yuan Xun, Ee-Chien Chang, Xiaochun Cao

In this paper, we explore the possibility of a lower-cost defense from the perspective of model unlearning, that is, whether the model can be made to quickly unlearn backdoor threats (UBT) by constructing a small set of poisoned samples.

backdoor defense • Contrastive Learning
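As a generic, hedged sketch of the unlearning idea described above (not the UBT procedure itself), a model can be fine-tuned for a few steps to descend on clean data while ascending on the small suspected-poisoned set; model, clean_loader, poison_loader, and loss_fn are assumed to exist and are not defined in the excerpt.

import itertools
import torch

def unlearn_backdoor(model, clean_loader, poison_loader, loss_fn,
                     steps=100, lr=1e-4, ascent_weight=0.5):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    clean_batches = itertools.cycle(clean_loader)
    poison_batches = itertools.cycle(poison_loader)
    model.train()
    for _ in range(steps):
        xc, yc = next(clean_batches)
        xp, yp = next(poison_batches)
        # Descend on clean data to preserve utility; ascend on the suspected
        # poisoned samples so the trigger-target association is forgotten.
        loss = loss_fn(model(xc), yc) - ascent_weight * loss_fn(model(xp), yp)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model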

Hide in Thicket: Generating Imperceptible and Rational Adversarial Perturbations on 3D Point Clouds

1 code implementation • CVPR 2024 • Tianrui Lou, Xiaojun Jia, Jindong Gu, Li Liu, Siyuan Liang, Bangyan He, Xiaochun Cao

We find that concealing deformation perturbations in areas insensitive to human eyes can achieve a better trade-off between imperceptibility and adversarial strength, specifically in parts of the object surface that are complex and exhibit drastic curvature changes.

3D Point Cloud Classification • Adversarial Attack • +1

Semantic Mirror Jailbreak: Genetic Algorithm Based Jailbreak Prompts Against Open-source LLMs

no code implementations • 21 Feb 2024 • Xiaoxia Li, Siyuan Liang, Jiyi Zhang, Han Fang, Aishan Liu, Ee-Chien Chang

Large Language Models (LLMs), used in creative writing, code generation, and translation, generate text based on input sequences but are vulnerable to jailbreak attacks, where crafted prompts induce harmful outputs.

Code Generation • Semantic Similarity • +1

Poisoned Forgery Face: Towards Backdoor Attacks on Face Forgery Detection

1 code implementation • 18 Feb 2024 • Jiawei Liang, Siyuan Liang, Aishan Liu, Xiaojun Jia, Junhao Kuang, Xiaochun Cao

However, this paper introduces a novel and previously unrecognized threat in face forgery detection scenarios caused by backdoor attacks.

Backdoor Attack

Less is More: Fewer Interpretable Region via Submodular Subset Selection

1 code implementation • 14 Feb 2024 • Ruoyu Chen, Hua Zhang, Siyuan Liang, Jingzhi Li, Xiaochun Cao

For incorrectly predicted samples, our method achieves gains of 81.0% and 18.4% compared to the HSIC-Attribution algorithm in the average highest confidence and Insertion score respectively.

Error Understanding • Image Attribution • +1

Does Few-shot Learning Suffer from Backdoor Attacks?

no code implementations • 31 Dec 2023 • Xinwei Liu, Xiaojun Jia, Jindong Gu, Yuan Xun, Siyuan Liang, Xiaochun Cao

However, in this paper, we propose the Few-shot Learning Backdoor Attack (FLBA) to show that FSL can still be vulnerable to backdoor attacks.

Backdoor Attack • Few-Shot Learning

BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning

1 code implementation • CVPR 2024 • Siyuan Liang, Mingli Zhu, Aishan Liu, Baoyuan Wu, Xiaochun Cao, Ee-Chien Chang

This paper reveals that, in this practical scenario, backdoor attacks can remain effective even after defenses, and introduces the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.

Backdoor Attack • Contrastive Learning

Improving Adversarial Transferability by Stable Diffusion

no code implementations • 18 Nov 2023 • Jiayang Liu, Siyu Zhu, Siyuan Liang, Jie Zhang, Han Fang, Weiming Zhang, Ee-Chien Chang

Various techniques have emerged to enhance the transferability of adversarial attacks for the black-box scenario.

Face Encryption via Frequency-Restricted Identity-Agnostic Attacks

no code implementations • 11 Aug 2023 • Xin Dong, Rui Wang, Siyuan Liang, Aishan Liu, Lihua Jing

As for feasibility in the weak black-box scenario, we observe that representations of the average feature across multiple face recognition models are similar; we therefore propose to utilize the average feature, obtained via a dataset crawled from the Internet, as the target to guide generation, which is also agnostic to the identities of unknown face recognition systems. By nature, low-frequency perturbations are more visually perceptible to the human vision system.

Face Recognition

Isolation and Induction: Training Robust Deep Neural Networks against Model Stealing Attacks

1 code implementation • 2 Aug 2023 • Jun Guo, Aishan Liu, Xingyu Zheng, Siyuan Liang, Yisong Xiao, Yichao Wu, Xianglong Liu

However, these defenses are now suffering problems of high inference computational overheads and unfavorable trade-offs between benign accuracy and stealing robustness, which challenges the feasibility of deployed models in practice.

Privacy-Enhancing Face Obfuscation Guided by Semantic-Aware Attribution Maps

no code implementations • IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY 2023 • Jingzhi Li, Hua Zhang, Siyuan Liang, Pengwen Dai, Xiaochun Cao

Within this module, we introduce a pixel importance estimation model based on Shapley value to obtain a pixel-level attribution map, and then each pixel on the attribution map is aggregated into semantic facial parts, which are used to quantify the importance of different facial parts.

Face Recognition
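The aggregation step described in the entry above (pixel-level attributions pooled into semantic facial parts) can be sketched as follows; this is only an assumed illustration, not the paper's model, and it takes an HxW attribution map plus an HxW integer part-segmentation map as inputs.

import numpy as np

def part_importance(attribution: np.ndarray, parts: np.ndarray) -> dict:
    # attribution: HxW pixel-level importance map (e.g., Shapley-value based)
    # parts: HxW integer labels for semantic facial parts (0 assumed background)
    scores = {}
    for label in np.unique(parts):
        if label == 0:  # skip background, an illustrative convention
            continue
        scores[int(label)] = float(attribution[parts == label].sum())
    return scores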

Diversifying the High-level Features for better Adversarial Transferability

2 code implementations • 20 Apr 2023 • Zhiyuan Wang, Zeliang Zhang, Siyuan Liang, Xiaosen Wang

Incorporated into the input transformation-based attacks, DHF generates more transferable adversarial examples and outperforms the baselines with a clear margin when attacking several defense models, showing its generalization to various attacks and high effectiveness for boosting transferability.

Vocal Bursts Intensity Prediction

X-Adv: Physical Adversarial Object Attacks against X-ray Prohibited Item Detection

1 code implementation • 19 Feb 2023 • Aishan Liu, Jun Guo, Jiakai Wang, Siyuan Liang, Renshuai Tao, Wenbo Zhou, Cong Liu, Xianglong Liu, DaCheng Tao

In this paper, we take the first step toward the study of adversarial attacks targeted at X-ray prohibited item detection, and reveal the serious threats posed by such attacks in this safety-critical scenario.

Adversarial Attack

Exploring the Relationship Between Architectural Design and Adversarially Robust Generalization

no code implementations • CVPR 2023 • Aishan Liu, Shiyu Tang, Siyuan Liang, Ruihao Gong, Boxi Wu, Xianglong Liu, DaCheng Tao

In particular, we comprehensively evaluated 20 most representative adversarially trained architectures on ImageNette and CIFAR-10 datasets towards multiple l_p-norm adversarial attacks.

Learning to Optimize Permutation Flow Shop Scheduling via Graph-based Imitation Learning

1 code implementation • 31 Oct 2022 • Longkang Li, Siyuan Liang, Zihao Zhu, Chris Ding, Hongyuan Zha, Baoyuan Wu

Compared to the state-of-the-art reinforcement learning method, our model's network parameters are reduced to only 37% of theirs, and the solution gap of our model towards the expert solutions decreases from 6.8% to 1.3% on average.

Computational Efficiency • Imitation Learning • +4

SimpleDG: Simple Domain Generalization Baseline without Bells and Whistles

1 code implementation • 26 Oct 2022 • Zhi Lv, Bo Lin, Siyuan Liang, Lihua Wang, Mochen Yu, Yao Tang, Jiajun Liang

We present a simple domain generalization baseline, which won second place in both the common context generalization track and the hybrid context generalization track of NICO CHALLENGE 2022.

Domain Generalization

Exploring the Relationship between Architecture and Adversarially Robust Generalization

no code implementations • 28 Sep 2022 • Aishan Liu, Shiyu Tang, Siyuan Liang, Ruihao Gong, Boxi Wu, Xianglong Liu, DaCheng Tao

In particular, we comprehensively evaluated 20 most representative adversarially trained architectures on ImageNette and CIFAR-10 datasets towards multiple l_p-norm adversarial attacks.

Exploring Inconsistent Knowledge Distillation for Object Detection with Data Augmentation

1 code implementation • 20 Sep 2022 • Jiawei Liang, Siyuan Liang, Aishan Liu, Ke Ma, Jingzhi Li, Xiaochun Cao

Specifically, we propose a sample-specific data augmentation to transfer the teacher model's ability in capturing distinct frequency components and suggest an adversarial feature augmentation to extract the teacher model's perceptions of non-robust features in the data.

Data Augmentation • Knowledge Distillation • +2

A Large-scale Multiple-objective Method for Black-box Attack against Object Detection

no code implementations • 16 Sep 2022 • Siyuan Liang, Longkang Li, Yanbo Fan, Xiaojun Jia, Jingzhi Li, Baoyuan Wu, Xiaochun Cao

Recent studies have shown that detectors based on deep models are vulnerable to adversarial examples, even in the black-box scenario where the attacker cannot access the model information.

object-detection • Object Detection

Improving Robust Fairness via Balance Adversarial Training

no code implementations • 15 Sep 2022 • ChunYu Sun, Chenye Xu, Chengyuan Yao, Siyuan Liang, Yichao Wu, Ding Liang, Xianglong Liu, Aishan Liu

Adversarial training (AT) methods are effective against adversarial attacks, yet they introduce severe disparity of accuracy and robustness between different classes, known as the robust fairness problem.

Fairness

Universal Backdoor Attacks Detection via Adaptive Adversarial Probe

no code implementations • 12 Sep 2022 • Yuhang Wang, Huafeng Shi, Rui Min, Ruijia Wu, Siyuan Liang, Yichao Wu, Ding Liang, Aishan Liu

Most detection methods are designed to verify whether a model is infected with presumed types of backdoor attacks, yet the adversary is likely to generate diverse backdoor attacks in practice that are unforeseen to defenders, which challenge current detection strategies.

Scheduling

Edge YOLO: Real-Time Intelligent Object Detection System Based on Edge-Cloud Cooperation in Autonomous Vehicles

no code implementations • 30 May 2022 • Siyuan Liang, Hao Wu

Driven by the ever-increasing requirements of autonomous vehicles, such as traffic monitoring and driving assistant, deep learning-based object detection (DL-OD) has been increasingly attractive in intelligent transportation systems.

Autonomous Driving • Cloud Computing • +2

Fast Transient Stability Prediction Using Grid-informed Temporal and Topological Embedding Deep Neural Network

no code implementations • 23 Jan 2022 • Peiyuan Sun, Long Huo, Siyuan Liang, Xin Chen

Transient stability prediction is essential for fast online assessment and for maintaining stable operation of power systems.

Time Series • Time Series Analysis

Parallel Rectangle Flip Attack: A Query-based Black-box Attack against Object Detection

no code implementations • ICCV 2021 • Siyuan Liang, Baoyuan Wu, Yanbo Fan, Xingxing Wei, Xiaochun Cao

Extensive experiments demonstrate that our method can effectively and efficiently attack various popular object detectors, including anchor-based and anchor-free, and generate transferable adversarial examples.

Autonomous Driving • Image Classification • +2

Generate More Imperceptible Adversarial Examples for Object Detection

no code implementations • ICML Workshop AML 2021 • Siyuan Liang, Xingxing Wei, Xiaochun Cao

The existing attack methods have the following problems: 1) training the generator takes a long time and is difficult to extend to a large dataset; 2) the excessive destruction of image features does not improve the black-box attack effect (the generated adversarial examples have poor transferability) and brings about visible perturbations.

Object • object-detection • +1

Efficient Adversarial Attacks for Visual Object Tracking

no code implementations • ECCV 2020 • Siyuan Liang, Xingxing Wei, Siyuan Yao, Xiaochun Cao

In this paper, we analyze the weakness of object trackers based on the Siamese network and then extend adversarial examples to visual object tracking.

Object • Visual Object Tracking • +1

Transferable Adversarial Attacks for Image and Video Object Detection

3 code implementations • 30 Nov 2018 • Xingxing Wei, Siyuan Liang, Ning Chen, Xiaochun Cao

Adversarial examples have been demonstrated to threaten many computer vision tasks including object detection.

Generative Adversarial Network • Object • +2
