no code implementations • 24 Mar 2024 • Siyuan Liang, Kuanrong Liu, Jiajun Gong, Jiawei Liang, Yuan Xun, Ee-Chien Chang, Xiaochun Cao
In this paper, we explore the possibility of a lower-cost defense from the perspective of model unlearning, that is, whether the model can be made to quickly unlearn backdoor threats (UBT) by constructing a small set of poisoned samples.
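The general idea of unlearning a backdoor from a small poisoned set can be illustrated with a short sketch. This is a minimal, assumed implementation for intuition only (not the authors' UBT method): it ascends on the loss of suspected poisoned samples to break the trigger-to-target mapping while descending on clean samples to preserve benign accuracy.

```python
# Hedged sketch: backdoor unlearning with a small suspected-poisoned set.
# `poisoned_loader` and `clean_loader` are assumed PyTorch DataLoaders.
import torch
import torch.nn.functional as F

def unlearn_backdoor(model, poisoned_loader, clean_loader,
                     lr=1e-4, ascent_weight=1.0, steps=100):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    poisoned_iter, clean_iter = iter(poisoned_loader), iter(clean_loader)
    for _ in range(steps):
        try:
            xp, yp = next(poisoned_iter)   # suspected poisoned batch
            xc, yc = next(clean_iter)      # clean batch
        except StopIteration:
            break
        # Descend on the clean loss (keep benign accuracy) while ascending on
        # the poisoned loss (forget the trigger -> target-label mapping).
        loss = F.cross_entropy(model(xc), yc) \
               - ascent_weight * F.cross_entropy(model(xp), yp)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```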
1 code implementation • 24 Mar 2024 • Siyuan Liang, Wei Wang, Ruoyu Chen, Aishan Liu, Boxi Wu, Ee-Chien Chang, Xiaochun Cao, DaCheng Tao
This paper aims to bridge this gap by conducting a comprehensive review and analysis of object detectors in open environments.
1 code implementation • 8 Mar 2024 • Tianrui Lou, Xiaojun Jia, Jindong Gu, Li Liu, Siyuan Liang, Bangyan He, Xiaochun Cao
We find that concealing deformation perturbations in areas insensitive to human eyes can achieve a better trade-off between imperceptibility and adversarial strength, specifically in parts of the object surface that are complex and exhibit drastic curvature changes.
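A toy sketch of the stated intuition follows; it is an assumed illustration, not the paper's code. It estimates per-point surface variation on a point cloud and scales deformation perturbations so that larger changes are allowed only in high-curvature regions, which the sentence above identifies as less sensitive to human eyes.

```python
# Hedged sketch: curvature-weighted deformation of a point cloud (N, 3).
import numpy as np
from scipy.spatial import cKDTree

def curvature_weights(points, k=16):
    """Surface variation per point: smallest eigenvalue share of the local
    covariance (0 for flat neighbourhoods, larger for curved ones)."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    weights = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        cov = np.cov(points[nbrs].T)              # 3x3 local covariance
        eigvals = np.sort(np.linalg.eigvalsh(cov))
        weights[i] = eigvals[0] / (eigvals.sum() + 1e-12)
    return weights / (weights.max() + 1e-12)

def masked_deformation(points, perturbation, k=16):
    # Scale the raw perturbation (N, 3) by each point's curvature weight.
    return points + curvature_weights(points, k)[:, None] * perturbation
```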
no code implementations • 21 Feb 2024 • Jiawei Liang, Siyuan Liang, Man Luo, Aishan Liu, Dongchen Han, Ee-Chien Chang, Xiaochun Cao
Nevertheless, the frozen visual encoder in autoregressive VLMs imposes constraints on the learning of conventional image triggers.
no code implementations • 21 Feb 2024 • Xiaoxia Li, Siyuan Liang, Jiyi Zhang, Han Fang, Aishan Liu, Ee-Chien Chang
Large Language Models (LLMs), used in creative writing, code generation, and translation, generate text based on input sequences but are vulnerable to jailbreak attacks, where crafted prompts induce harmful outputs.
1 code implementation • 18 Feb 2024 • Jiawei Liang, Siyuan Liang, Aishan Liu, Xiaojun Jia, Junhao Kuang, Xiaochun Cao
However, this paper introduces a novel and previously unrecognized threat in face forgery detection scenarios caused by backdoor attacks.
1 code implementation • 14 Feb 2024 • Ruoyu Chen, Hua Zhang, Siyuan Liang, Jingzhi Li, Xiaochun Cao
For incorrectly predicted samples, our method achieves gains of 81.0% and 18.4% over the HSIC-Attribution algorithm in average highest confidence and Insertion score, respectively.
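The Insertion score referenced here can be computed as in the common attribution-evaluation protocol; the sketch below is a hedged, assumed definition (reveal pixels in decreasing order of attribution and average the target-class probability, i.e., the area under the insertion curve), not the authors' exact evaluation code.

```python
# Hedged sketch: Insertion metric for a pixel-attribution map.
import torch

def insertion_score(model, image, attribution, target, steps=50, baseline=None):
    """image: (C,H,W) tensor, attribution: (H,W) tensor, target: class index."""
    C, H, W = image.shape
    baseline = torch.zeros_like(image) if baseline is None else baseline
    order = attribution.flatten().argsort(descending=True)
    current = baseline.clone()
    chunk = max(1, (H * W) // steps)
    probs = []
    model.eval()
    with torch.no_grad():
        for start in range(0, H * W, chunk):
            keep = order[start:start + chunk]
            mask = torch.zeros(H * W, dtype=torch.bool)
            mask[keep] = True
            # Reveal the next most-important pixels on top of the baseline.
            current.view(C, -1)[:, mask] = image.view(C, -1)[:, mask]
            p = torch.softmax(model(current.unsqueeze(0)), dim=1)[0, target]
            probs.append(p.item())
    return sum(probs) / len(probs)   # area under the insertion curve
```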
no code implementations • 31 Dec 2023 • Xinwei Liu, Xiaojun Jia, Jindong Gu, Yuan Xun, Siyuan Liang, Xiaochun Cao
However, in this paper, we propose the Few-shot Learning Backdoor Attack (FLBA) to show that FSL can still be vulnerable to backdoor attacks.
no code implementations • 23 Dec 2023 • Aishan Liu, Xinwei Zhang, Yisong Xiao, Yuguang Zhou, Siyuan Liang, Jiakai Wang, Xianglong Liu, Xiaochun Cao, DaCheng Tao
This paper aims to raise awareness of the potential threats associated with applying PVMs in practical scenarios.
no code implementations • 8 Dec 2023 • Bangyan He, Xiaojun Jia, Siyuan Liang, Tianrui Lou, Yang Liu, Xiaochun Cao
Current Visual-Language Pre-training (VLP) models are vulnerable to adversarial examples.
no code implementations • 20 Nov 2023 • Siyuan Liang, Mingli Zhu, Aishan Liu, Baoyuan Wu, Xiaochun Cao, Ee-Chien Chang
This paper reveals the threats in this practical scenario, showing that backdoor attacks can remain effective even after defenses are applied, and introduces a new attack that is resistant to backdoor detection and model fine-tuning defenses.
no code implementations • 18 Nov 2023 • Jiayang Liu, Siyu Zhu, Siyuan Liang, Jie Zhang, Han Fang, Weiming Zhang, Ee-Chien Chang
Various techniques have emerged to enhance the transferability of adversarial attacks for the black-box scenario.
no code implementations • 11 Aug 2023 • Xin Dong, Rui Wang, Siyuan Liang, Aishan Liu, Lihua Jing
Regarding feasibility in the weak black-box scenario, we observe that the average feature representations of multiple face recognition models are similar; we therefore propose to use the average feature, computed from a dataset crawled from the Internet, as the target to guide generation, which is also agnostic to the identities enrolled in unknown face recognition systems. By nature, low-frequency perturbations are more visually perceptible to the human visual system.
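A minimal sketch of the "average feature" guidance described above is given here; the encoder list, the crawled image batch, and the cosine-distance loss are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch: identity-averaged target features across several face encoders.
import torch
import torch.nn.functional as F

def average_feature_targets(models, crawled_faces):
    """One identity-averaged target embedding per encoder (dims may differ)."""
    targets = []
    with torch.no_grad():
        for m in models:
            emb = F.normalize(m(crawled_faces), dim=1)   # (N, D_m)
            targets.append(F.normalize(emb.mean(dim=0), dim=0))
    return targets

def guidance_loss(models, targets, protected_image):
    # Pull the protected image's features towards the averaged targets so that
    # unknown recognisers also fail to match the original identity.
    loss = 0.0
    for m, t in zip(models, targets):
        f = F.normalize(m(protected_image), dim=1)[0]
        loss = loss + (1.0 - torch.dot(f, t))
    return loss
```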
1 code implementation • 2 Aug 2023 • Jun Guo, Aishan Liu, Xingyu Zheng, Siyuan Liang, Yisong Xiao, Yichao Wu, Xianglong Liu
However, these defenses suffer from high inference computational overhead and unfavorable trade-offs between benign accuracy and stealing robustness, which challenges the feasibility of deploying such models in practice.
no code implementations • IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY 2023 • Jingzhi Li, Hua Zhang, Siyuan Liang, Pengwen Dai, Xiaochun Cao
Within this module, we introduce a pixel importance estimation model based on Shapley values to obtain a pixel-level attribution map; the pixels of the attribution map are then aggregated into semantic facial parts, which are used to quantify the importance of different facial parts.
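The aggregation step can be sketched in a few lines. This is a hedged illustration: the Shapley-value attribution map is assumed to be given, and `part_masks` is assumed to come from an off-the-shelf face parser (eyes, nose, mouth, etc.).

```python
# Hedged sketch: aggregate pixel-level attributions into facial-part scores.
import numpy as np

def aggregate_to_facial_parts(attribution_map, part_masks):
    """attribution_map: (H, W) array; part_masks: dict of boolean (H, W) masks."""
    scores = {name: float(attribution_map[mask].sum())
              for name, mask in part_masks.items()}
    total = sum(abs(v) for v in scores.values()) + 1e-12
    return {name: v / total for name, v in scores.items()}  # normalised importance
```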
2 code implementations • 20 Apr 2023 • Zhiyuan Wang, Zeliang Zhang, Siyuan Liang, Xiaosen Wang
Incorporated into the input transformation-based attacks, DHF generates more transferable adversarial examples and outperforms the baselines with a clear margin when attacking several defense models, showing its generalization to various attacks and high effectiveness for boosting transferability.
1 code implementation • 19 Feb 2023 • Aishan Liu, Jun Guo, Jiakai Wang, Siyuan Liang, Renshuai Tao, Wenbo Zhou, Cong Liu, Xianglong Liu, DaCheng Tao
In this paper, we take the first step toward the study of adversarial attacks targeted at X-ray prohibited item detection, and reveal the serious threats posed by such attacks in this safety-critical scenario.
no code implementations • 23 Jan 2023 • Siyuan Liang, Long Huo, Xin Chen, Peiyuan Sun
Wide-area damping control for inter-area oscillation (IAO) is critical to modern power systems.
no code implementations • CVPR 2023 • Aishan Liu, Shiyu Tang, Siyuan Liang, Ruihao Gong, Boxi Wu, Xianglong Liu, DaCheng Tao
In particular, we comprehensively evaluated the 20 most representative adversarially trained architectures on the ImageNette and CIFAR-10 datasets against multiple l_p-norm adversarial attacks.
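The kind of robustness evaluation described here typically measures accuracy under a projected-gradient attack within an l_p budget. The sketch below is a hedged example of a standard l_inf PGD evaluation loop, not the benchmark's actual attack suite.

```python
# Hedged sketch: robust accuracy under a standard l_inf PGD attack.
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Return adversarial examples within an l_inf ball of radius eps around x."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv

def robust_accuracy(model, loader, **pgd_kwargs):
    correct = total = 0
    model.eval()
    for x, y in loader:
        x_adv = pgd_linf(model, x, y, **pgd_kwargs)
        with torch.no_grad():
            correct += (model(x_adv).argmax(1) == y).sum().item()
        total += y.numel()
    return correct / total
```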
1 code implementation • 31 Oct 2022 • Longkang Li, Siyuan Liang, Zihao Zhu, Chris Ding, Hongyuan Zha, Baoyuan Wu
Compared to the state-of-the-art reinforcement learning method, our model's network parameters are reduced to only 37% of theirs, and the solution gap of our model relative to the expert solutions decreases from 6.8% to 1.3% on average.
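For clarity, the "solution gap" reported here is the usual relative-gap metric for scheduling solvers; the formula below is an assumed, common definition rather than the paper's verbatim evaluation code.

```python
# Hedged sketch: average relative solution gap to expert solutions, in percent.
def solution_gap(model_costs, expert_costs):
    gaps = [(m - e) / e for m, e in zip(model_costs, expert_costs)]
    return 100.0 * sum(gaps) / len(gaps)

# Example: a drop from roughly 6.8% to 1.3% means the learned scheduler's
# objective values moved much closer to the expert solutions on average.
print(solution_gap([108.0, 102.5], [100.0, 100.0]))   # -> 5.25
```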
1 code implementation • 26 Oct 2022 • Zhi Lv, Bo Lin, Siyuan Liang, Lihua Wang, Mochen Yu, Yao Tang, Jiajun Liang
We present a simple domain generalization baseline, which wins second place in both the common context generalization track and the hybrid context generalization track of NICO CHALLENGE 2022.
no code implementations • 28 Sep 2022 • Aishan Liu, Shiyu Tang, Siyuan Liang, Ruihao Gong, Boxi Wu, Xianglong Liu, DaCheng Tao
In particular, we comprehensively evaluated the 20 most representative adversarially trained architectures on the ImageNette and CIFAR-10 datasets against multiple l_p-norm adversarial attacks.
1 code implementation • 20 Sep 2022 • Jiawei Liang, Siyuan Liang, Aishan Liu, Ke Ma, Jingzhi Li, Xiaochun Cao
Specifically, we propose a sample-specific data augmentation to transfer the teacher model's ability in capturing distinct frequency components and suggest an adversarial feature augmentation to extract the teacher model's perceptions of non-robust features in the data.
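One way to picture the frequency-oriented augmentation described above is a per-sample re-weighting of low- versus high-frequency bands before distillation. The sketch below is an assumed form in that spirit, not the authors' exact recipe.

```python
# Hedged sketch: scale low- and high-frequency components of an image batch.
import torch

def frequency_augment(x, radius_ratio=0.25, low_gain=1.0, high_gain=0.5):
    """x: (N, C, H, W) image batch in [0, 1]."""
    freq = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
    N, C, H, W = x.shape
    yy, xx = torch.meshgrid(
        torch.arange(H, dtype=x.dtype, device=x.device),
        torch.arange(W, dtype=x.dtype, device=x.device),
        indexing="ij")
    dist = ((yy - H / 2) ** 2 + (xx - W / 2) ** 2).sqrt()
    low_mask = (dist <= radius_ratio * min(H, W)).to(x.dtype)
    # Keep low frequencies, attenuate high frequencies (gains are tunable).
    filtered = freq * (low_gain * low_mask + high_gain * (1 - low_mask))
    out = torch.fft.ifft2(torch.fft.ifftshift(filtered, dim=(-2, -1))).real
    return out.clamp(0, 1)
```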
no code implementations • 16 Sep 2022 • Siyuan Liang, Longkang Li, Yanbo Fan, Xiaojun Jia, Jingzhi Li, Baoyuan Wu, Xiaochun Cao
Recent studies have shown that detectors based on deep models are vulnerable to adversarial examples, even in the black-box scenario where the attacker cannot access the model information.
no code implementations • 15 Sep 2022 • ChunYu Sun, Chenye Xu, Chengyuan Yao, Siyuan Liang, Yichao Wu, Ding Liang, Xianglong Liu, Aishan Liu
Adversarial training (AT) methods are effective against adversarial attacks, yet they introduce a severe disparity in accuracy and robustness across classes, known as the robust fairness problem.
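The disparity mentioned here is typically quantified by per-class robust accuracy under a fixed attack and the spread between the best and worst classes. The sketch below is a hedged, assumed measurement protocol for that gap, not the paper's own evaluation script.

```python
# Hedged sketch: per-class robust accuracy and the robust-fairness gap.
from collections import defaultdict
import torch

def per_class_robust_accuracy(model, loader, attack, num_classes):
    """`attack` is any callable (model, x, y) -> x_adv, e.g. a PGD attack."""
    hits, counts = defaultdict(int), defaultdict(int)
    model.eval()
    for x, y in loader:
        x_adv = attack(model, x, y)
        with torch.no_grad():
            pred = model(x_adv).argmax(1)
        for c in range(num_classes):
            mask = y == c
            hits[c] += (pred[mask] == c).sum().item()
            counts[c] += mask.sum().item()
    acc = {c: hits[c] / max(counts[c], 1) for c in range(num_classes)}
    disparity = max(acc.values()) - min(acc.values())  # fairness gap across classes
    return acc, disparity
```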
no code implementations • 12 Sep 2022 • Yuhang Wang, Huafeng Shi, Rui Min, Ruijia Wu, Siyuan Liang, Yichao Wu, Ding Liang, Aishan Liu
Most detection methods are designed to verify whether a model is infected with presumed types of backdoor attacks, yet the adversary is likely to generate diverse backdoor attacks in practice that are unforeseen to defenders, which challenge current detection strategies.
no code implementations • 30 May 2022 • Siyuan Liang, Hao Wu
Driven by the ever-increasing requirements of autonomous vehicles, such as traffic monitoring and driving assistance, deep learning-based object detection (DL-OD) has become increasingly attractive in intelligent transportation systems.
no code implementations • 23 Jan 2022 • Peiyuan Sun, Long Huo, Siyuan Liang, Xin Chen
Transient stability prediction is essential to fast online assessment and to maintaining stable operation of power systems.
no code implementations • ICCV 2021 • Siyuan Liang, Baoyuan Wu, Yanbo Fan, Xingxing Wei, Xiaochun Cao
Extensive experiments demonstrate that our method can effectively and efficiently attack various popular object detectors, including anchor-based and anchor-free ones, and generate transferable adversarial examples.
no code implementations • ICML Workshop AML 2021 • Siyuan Liang, Xingxing Wei, Xiaochun Cao
The existing attack methods have the following problems: 1) training the generator takes a long time and is difficult to extend to large datasets; 2) excessive destruction of image features does not improve the black-box attack effect (the generated adversarial examples have poor transferability) and introduces visible perturbations.
no code implementations • ECCV 2020 • Siyuan Liang, Xingxing Wei, Siyuan Yao, Xiaochun Cao
In this paper, we analyze the weakness of object trackers based on the Siamese network and then extend adversarial examples to visual object tracking.
2 code implementations • 30 Nov 2018 • Xingxing Wei, Siyuan Liang, Ning Chen, Xiaochun Cao
Adversarial examples have been demonstrated to threaten many computer vision tasks including object detection.