Search Results for author: Siyuan Liang

Found 32 papers, 11 papers with code

Unlearning Backdoor Threats: Enhancing Backdoor Defense in Multimodal Contrastive Learning via Local Token Unlearning

no code implementations24 Mar 2024 Siyuan Liang, Kuanrong Liu, Jiajun Gong, Jiawei Liang, Yuan Xun, Ee-Chien Chang, Xiaochun Cao

In this paper, we explore the possibility of a lower-cost defense from the perspective of model unlearning, that is, whether the model can be made to quickly unlearn backdoor threats (UBT) by constructing a small set of poisoned samples.
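
The listing only sketches the idea in words; purely as an illustration of the underlying mechanism (gradient ascent on a small set of suspected poisoned samples, balanced by descent on clean data), a minimal PyTorch-style sketch might look like the following. The function name, loss formulation, and hyperparameters are assumptions for illustration, not the authors' UBT implementation.

```python
from itertools import cycle, islice

import torch
import torch.nn.functional as F

def unlearn_backdoor(model, poisoned_loader, clean_loader, lr=1e-4, steps=100):
    """Ascend on suspected poisoned batches to forget the trigger mapping,
    while descending on clean batches to retain benign accuracy."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for (xp, yp), (xc, yc) in islice(zip(cycle(poisoned_loader), cycle(clean_loader)), steps):
        # Negated loss on poisoned pairs = gradient ascent (forget the trigger);
        # standard loss on clean pairs preserves normal behaviour.
        loss = -F.cross_entropy(model(xp), yp) + F.cross_entropy(model(xc), yc)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```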

Backdoor Defense Contrastive Learning

Object Detectors in the Open Environment: Challenges, Solutions, and Outlook

1 code implementation24 Mar 2024 Siyuan Liang, Wei Wang, Ruoyu Chen, Aishan Liu, Boxi Wu, Ee-Chien Chang, Xiaochun Cao, DaCheng Tao

This paper aims to bridge this gap by conducting a comprehensive review and analysis of object detectors in open environments.

Incremental Learning

Hide in Thicket: Generating Imperceptible and Rational Adversarial Perturbations on 3D Point Clouds

1 code implementation8 Mar 2024 Tianrui Lou, Xiaojun Jia, Jindong Gu, Li Liu, Siyuan Liang, Bangyan He, Xiaochun Cao

We find that concealing deformation perturbations in areas insensitive to human eyes can achieve a better trade-off between imperceptibility and adversarial strength, specifically in parts of the object surface that are complex and exhibit drastic curvature changes.
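
As a rough illustration of how such high-curvature regions could be located (this is a generic surface-variation heuristic from point-cloud processing, not the paper's actual criterion), a numpy sketch:

```python
import numpy as np

def surface_variation(points, k=16):
    """Per-point 'surface variation' (smallest local-PCA eigenvalue over the
    eigenvalue sum), a standard proxy for how sharply curved the surface is."""
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    knn = np.argsort(dists, axis=1)[:, 1:k + 1]        # skip the point itself
    scores = np.empty(len(points))
    for i, idx in enumerate(knn):
        nbrs = points[idx] - points[idx].mean(axis=0)
        eig = np.linalg.eigvalsh(nbrs.T @ nbrs)        # ascending eigenvalues
        scores[i] = eig[0] / max(eig.sum(), 1e-12)
    return scores

pts = np.random.rand(1024, 3).astype(np.float32)       # stand-in point cloud
scores = surface_variation(pts)
mask = scores > np.percentile(scores, 90)              # top-10% most curved points
```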

3D Point Cloud Classification Adversarial Attack +1

Semantic Mirror Jailbreak: Genetic Algorithm Based Jailbreak Prompts Against Open-source LLMs

no code implementations21 Feb 2024 Xiaoxia Li, Siyuan Liang, Jiyi Zhang, Han Fang, Aishan Liu, Ee-Chien Chang

Large Language Models (LLMs), used in creative writing, code generation, and translation, generate text based on input sequences but are vulnerable to jailbreak attacks, where crafted prompts induce harmful outputs.
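
The title indicates the jailbreak prompts are evolved with a genetic algorithm; the listing gives no further detail, so the following is only a generic GA skeleton with placeholder `fitness` and `mutate` operators (in the paper, fitness would presumably combine semantic similarity to the seed question with attack success):

```python
import random

def genetic_prompt_search(seed, fitness, mutate, pop_size=20, generations=50):
    """Generic GA skeleton: rank prompt variants by a fitness score, keep the
    best half as parents, and fill the next generation with their mutations."""
    population = [seed] + [mutate(seed) for _ in range(pop_size - 1)]
    for _ in range(generations):
        parents = sorted(population, key=fitness, reverse=True)[: pop_size // 2]
        children = [mutate(random.choice(parents)) for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)
```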

Code Generation Semantic Similarity +1

Poisoned Forgery Face: Towards Backdoor Attacks on Face Forgery Detection

1 code implementation18 Feb 2024 Jiawei Liang, Siyuan Liang, Aishan Liu, Xiaojun Jia, Junhao Kuang, Xiaochun Cao

However, this paper introduces a novel and previously unrecognized threat in face forgery detection scenarios caused by backdoor attacks.

Backdoor Attack

Less is More: Fewer Interpretable Region via Submodular Subset Selection

1 code implementation14 Feb 2024 Ruoyu Chen, Hua Zhang, Siyuan Liang, Jingzhi Li, Xiaochun Cao

For incorrectly predicted samples, our method achieves gains of 81.0% and 18.4% compared to the HSIC-Attribution algorithm in the average highest confidence and Insertion score, respectively.
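
For context, the Insertion score mentioned here is a standard attribution-faithfulness metric: pixels are restored to a blank canvas in decreasing-attribution order and the area under the resulting model-confidence curve is measured. A minimal numpy sketch, assuming a grayscale image and a scalar-confidence callable:

```python
import numpy as np

def insertion_score(confidence, image, saliency, steps=50, baseline=0.0):
    """Restore pixels in decreasing-saliency order and return the (normalized)
    area under the confidence curve; higher means a more faithful attribution."""
    order = np.argsort(saliency.ravel())[::-1]         # most important first
    canvas = np.full(image.size, baseline, dtype=image.dtype)
    flat = image.ravel()
    confs = [confidence(canvas.reshape(image.shape))]
    for chunk in np.array_split(order, steps):
        canvas[chunk] = flat[chunk]                    # insert next pixel group
        confs.append(confidence(canvas.reshape(image.shape)))
    return float(np.mean(confs))                       # normalized AUC
```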

Interpretability Techniques for Deep Learning

Does Few-shot Learning Suffer from Backdoor Attacks?

no code implementations31 Dec 2023 Xinwei Liu, Xiaojun Jia, Jindong Gu, Yuan Xun, Siyuan Liang, Xiaochun Cao

However, in this paper, we propose the Few-shot Learning Backdoor Attack (FLBA) to show that FSL can still be vulnerable to backdoor attacks.

Backdoor Attack Few-Shot Learning

BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning

no code implementations20 Nov 2023 Siyuan Liang, Mingli Zhu, Aishan Liu, Baoyuan Wu, Xiaochun Cao, Ee-Chien Chang

This paper reveals that, in this practical scenario, backdoor attacks can remain effective even after defenses are applied, and introduces the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.

Backdoor Attack Contrastive Learning

Improving Adversarial Transferability by Stable Diffusion

no code implementations18 Nov 2023 Jiayang Liu, Siyu Zhu, Siyuan Liang, Jie Zhang, Han Fang, Weiming Zhang, Ee-Chien Chang

Various techniques have emerged to enhance the transferability of adversarial attacks for the black-box scenario.

Face Encryption via Frequency-Restricted Identity-Agnostic Attacks

no code implementations11 Aug 2023 Xin Dong, Rui Wang, Siyuan Liang, Aishan Liu, Lihua Jing

For feasibility in the weak black-box scenario, we observe that the average-feature representations of multiple face recognition models are similar, so we propose using the average feature, computed over a dataset crawled from the Internet, as the target to guide the generation; this target is also agnostic to the identities enrolled in unknown face recognition systems. By nature, low-frequency perturbations are more visually perceptible to the human vision system.
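
One plausible way to realize such a frequency restriction (the circular mask and cutoff below are illustrative assumptions, not the paper's design) is to remove the low-frequency content of the perturbation in the Fourier domain:

```python
import numpy as np

def drop_low_frequencies(perturbation, cutoff=0.1):
    """Zero out the lowest spatial frequencies of a (grayscale) perturbation,
    keeping only the high-frequency residue that is harder for humans to see."""
    spec = np.fft.fftshift(np.fft.fft2(perturbation))
    h, w = perturbation.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    spec[radius < cutoff * min(h, w)] = 0              # suppress low frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec)))
```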

Face Recognition

Isolation and Induction: Training Robust Deep Neural Networks against Model Stealing Attacks

1 code implementation2 Aug 2023 Jun Guo, Aishan Liu, Xingyu Zheng, Siyuan Liang, Yisong Xiao, Yichao Wu, Xianglong Liu

However, these defenses now suffer from high inference computational overhead and an unfavorable trade-off between benign accuracy and stealing robustness, which challenges the feasibility of deploying such models in practice.

Privacy-Enhancing Face Obfuscation Guided by Semantic-Aware Attribution Maps

no code implementations IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY 2023 Jingzhi Li, Hua Zhang, Siyuan Liang, Pengwen Dai, Xiaochun Cao

Within this module, we introduce a pixel importance estimation model based on the Shapley value to obtain a pixel-level attribution map; the pixels on the attribution map are then aggregated into semantic facial parts, which are used to quantify the importance of the different facial parts.
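
As a toy illustration of the aggregation idea (the paper estimates Shapley values at the pixel level first; this sketch works directly at the part level, and all names and the sampling scheme are hypothetical):

```python
import numpy as np

def part_shapley(confidence, image, parts, samples=200, baseline=0.0):
    """Monte-Carlo Shapley estimate at the level of semantic parts: reveal the
    parts of the image in a random order and credit each part with its average
    marginal gain in model confidence. `parts` assigns a part id per pixel."""
    ids = np.unique(parts)
    phi = dict.fromkeys(ids.tolist(), 0.0)
    for _ in range(samples):
        canvas = np.full_like(image, baseline)
        prev = confidence(canvas)
        for p in np.random.permutation(ids):
            canvas = np.where(parts == p, image, canvas)   # reveal part p
            cur = confidence(canvas)
            phi[p] += (cur - prev) / samples               # marginal contribution
            prev = cur
    return phi
```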

Face Recognition

Diversifying the High-level Features for better Adversarial Transferability

2 code implementations20 Apr 2023 Zhiyuan Wang, Zeliang Zhang, Siyuan Liang, Xiaosen Wang

Incorporated into the input transformation-based attacks, DHF generates more transferable adversarial examples and outperforms the baselines with a clear margin when attacking several defense models, showing its generalization to various attacks and high effectiveness for boosting transferability.
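
The snippet does not describe DHF's exact operations; as a loose sketch of the general mechanism of perturbing high-level features during an attack's forward passes, a PyTorch forward hook might look like this (the mixing rule, ratio, and layer choice are all assumptions, not the paper's method):

```python
import torch

def diversify_hook(mix=0.1):
    """Forward hook that randomly rescales feature maps and mixes in the
    features of other samples in the batch, so each attack iteration sees
    slightly different high-level representations."""
    def hook(module, inputs, output):
        scale = 1.0 + mix * (2.0 * torch.rand_like(output) - 1.0)
        shuffled = output[torch.randperm(output.size(0), device=output.device)]
        return (1.0 - mix) * scale * output + mix * shuffled
    return hook

# handle = model.layer3.register_forward_hook(diversify_hook())  # assumed layer
```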

X-Adv: Physical Adversarial Object Attacks against X-ray Prohibited Item Detection

1 code implementation19 Feb 2023 Aishan Liu, Jun Guo, Jiakai Wang, Siyuan Liang, Renshuai Tao, Wenbo Zhou, Cong Liu, Xianglong Liu, DaCheng Tao

In this paper, we take the first step toward the study of adversarial attacks targeted at X-ray prohibited item detection, and reveal the serious threats posed by such attacks in this safety-critical scenario.

Adversarial Attack

Exploring the Relationship Between Architectural Design and Adversarially Robust Generalization

no code implementations CVPR 2023 Aishan Liu, Shiyu Tang, Siyuan Liang, Ruihao Gong, Boxi Wu, Xianglong Liu, DaCheng Tao

In particular, we comprehensively evaluated the 20 most representative adversarially trained architectures on the ImageNette and CIFAR-10 datasets against multiple l_p-norm adversarial attacks.

Learning to Optimize Permutation Flow Shop Scheduling via Graph-based Imitation Learning

1 code implementation31 Oct 2022 Longkang Li, Siyuan Liang, Zihao Zhu, Chris Ding, Hongyuan Zha, Baoyuan Wu

Compared to the state-of-the-art reinforcement learning method, our model's network parameters are reduced to only 37% of theirs, and the average solution gap of our model relative to the expert solutions decreases from 6.8% to 1.3%.

Computational Efficiency Imitation Learning +3

SimpleDG: Simple Domain Generalization Baseline without Bells and Whistles

1 code implementation26 Oct 2022 Zhi Lv, Bo Lin, Siyuan Liang, Lihua Wang, Mochen Yu, Yao Tang, Jiajun Liang

We present a simple domain generalization baseline, which won second place in both the common context generalization track and the hybrid context generalization track of NICO CHALLENGE 2022.

Domain Generalization

Exploring the Relationship between Architecture and Adversarially Robust Generalization

no code implementations28 Sep 2022 Aishan Liu, Shiyu Tang, Siyuan Liang, Ruihao Gong, Boxi Wu, Xianglong Liu, DaCheng Tao

In particular, we comprehensively evaluated the 20 most representative adversarially trained architectures on the ImageNette and CIFAR-10 datasets against multiple l_p-norm adversarial attacks.

Exploring Inconsistent Knowledge Distillation for Object Detection with Data Augmentation

1 code implementation20 Sep 2022 Jiawei Liang, Siyuan Liang, Aishan Liu, Ke Ma, Jingzhi Li, Xiaochun Cao

Specifically, we propose a sample-specific data augmentation to transfer the teacher model's ability to capture distinct frequency components, and suggest an adversarial feature augmentation to extract the teacher model's perceptions of non-robust features in the data.
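
A minimal sketch of what a frequency-component augmentation could look like (the band and gain below are illustrative assumptions; the paper's sample-specific scheme is more involved):

```python
import numpy as np

def band_emphasis(image, band=(0.1, 0.5), gain=1.5):
    """Amplify one annular frequency band of a grayscale image, producing a
    sample whose distinctive frequency components are exaggerated."""
    spec = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)  # normalized radius
    spec[(r >= band[0]) & (r < band[1])] *= gain            # boost the band
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec)))
```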

Data Augmentation Knowledge Distillation +2

A Large-scale Multiple-objective Method for Black-box Attack against Object Detection

no code implementations16 Sep 2022 Siyuan Liang, Longkang Li, Yanbo Fan, Xiaojun Jia, Jingzhi Li, Baoyuan Wu, Xiaochun Cao

Recent studies have shown that detectors based on deep models are vulnerable to adversarial examples, even in the black-box scenario where the attacker cannot access the model information.

Object Detection

Improving Robust Fairness via Balance Adversarial Training

no code implementations15 Sep 2022 ChunYu Sun, Chenye Xu, Chengyuan Yao, Siyuan Liang, Yichao Wu, Ding Liang, Xianglong Liu, Aishan Liu

Adversarial training (AT) methods are effective against adversarial attacks, yet they introduce severe disparity of accuracy and robustness between different classes, known as the robust fairness problem.

Fairness

Universal Backdoor Attacks Detection via Adaptive Adversarial Probe

no code implementations12 Sep 2022 Yuhang Wang, Huafeng Shi, Rui Min, Ruijia Wu, Siyuan Liang, Yichao Wu, Ding Liang, Aishan Liu

Most detection methods are designed to verify whether a model is infected with presumed types of backdoor attacks, yet in practice the adversary is likely to generate diverse backdoor attacks that are unforeseen to defenders, challenging current detection strategies.

Edge YOLO: Real-Time Intelligent Object Detection System Based on Edge-Cloud Cooperation in Autonomous Vehicles

no code implementations30 May 2022 Siyuan Liang, Hao Wu

Driven by the ever-increasing requirements of autonomous vehicles, such as traffic monitoring and driving assistance, deep learning-based object detection (DL-OD) has become increasingly attractive in intelligent transportation systems.

Autonomous Driving Cloud Computing +2

Fast Transient Stability Prediction Using Grid-informed Temporal and Topological Embedding Deep Neural Network

no code implementations23 Jan 2022 Peiyuan Sun, Long Huo, Siyuan Liang, Xin Chen

Transient stability prediction is essential for fast online assessment and for maintaining stable operation of power systems.

Time Series Time Series Analysis

Parallel Rectangle Flip Attack: A Query-based Black-box Attack against Object Detection

no code implementations ICCV 2021 Siyuan Liang, Baoyuan Wu, Yanbo Fan, Xingxing Wei, Xiaochun Cao

Extensive experiments demonstrate that our method can effectively and efficiently attack various popular object detectors, including anchor-based and anchor-free, and generate transferable adversarial examples.

Autonomous Driving Image Classification +2

Generate More Imperceptible Adversarial Examples for Object Detection

no code implementations ICML Workshop AML 2021 Siyuan Liang, Xingxing Wei, Xiaochun Cao

The existing attack methods have the following problems: 1) training the generator takes a long time and is difficult to extend to large datasets; 2) excessive destruction of image features does not improve the black-box attack effect (the generated adversarial examples have poor transferability) and introduces visible perturbations.

Object Detection +1

Efficient Adversarial Attacks for Visual Object Tracking

no code implementations ECCV 2020 Siyuan Liang, Xingxing Wei, Siyuan Yao, Xiaochun Cao

In this paper, we analyze the weakness of object trackers based on the Siamese network and then extend adversarial examples to visual object tracking.

Visual Object Tracking +1

Transferable Adversarial Attacks for Image and Video Object Detection

2 code implementations30 Nov 2018 Xingxing Wei, Siyuan Liang, Ning Chen, Xiaochun Cao

Adversarial examples have been demonstrated to threaten many computer vision tasks including object detection.

Generative Adversarial Network +2
