Search Results for author: Guanhong Tao

Found 27 papers, 17 papers with code

Threat Behavior Textual Search by Attention Graph Isomorphism

1 code implementation · 16 Apr 2024 · Chanwoo Bae, Guanhong Tao, Zhuo Zhang, Xiangyu Zhang

As such, analysts often resort to text search techniques to identify existing malware reports based on the symptoms they observe, exploiting the fact that malware samples share substantial similarity, especially those from the same origin.

Attribute · Malware Analysis · +2

Rapid Optimization for Jailbreaking LLMs via Subconscious Exploitation and Echopraxia

1 code implementation · 8 Feb 2024 · Guangyu Shen, Siyuan Cheng, Kaiyuan Zhang, Guanhong Tao, Shengwei An, Lu Yan, Zhuo Zhang, Shiqing Ma, Xiangyu Zhang

Large Language Models (LLMs) have become prevalent across diverse sectors, transforming human life with their extraordinary reasoning and comprehension abilities.

Make Them Spill the Beans! Coercive Knowledge Extraction from (Production) LLMs

no code implementations · 8 Dec 2023 · Zhuo Zhang, Guangyu Shen, Guanhong Tao, Siyuan Cheng, Xiangyu Zhang

Instead, it exploits the fact that even when an LLM rejects a toxic request, a harmful response often hides deep in the output logits.
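A minimal sketch of the underlying observation, not the paper's extraction method: even when greedy decoding yields a refusal, lower-ranked candidates in the next-token logits can be inspected directly. The model ("gpt2") and prompt below are placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")           # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("The answer is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]            # next-token logits

# Rank 1 is what greedy decoding would emit; lower ranks are the alternative
# continuations that still live in the output distribution.
topk = torch.topk(logits, k=5)
for rank, (score, idx) in enumerate(zip(topk.values, topk.indices), start=1):
    print(rank, repr(tok.decode(idx)), float(score))
```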

Elijah: Eliminating Backdoors Injected in Diffusion Models via Distribution Shift

1 code implementation · 27 Nov 2023 · Shengwei An, Sheng-Yen Chou, Kaiyuan Zhang, QiuLing Xu, Guanhong Tao, Guangyu Shen, Siyuan Cheng, Shiqing Ma, Pin-Yu Chen, Tsung-Yi Ho, Xiangyu Zhang

Diffusion models (DM) have become state-of-the-art generative models because of their capability to generate high-quality images from noise without adversarial training.

Fusion is Not Enough: Single Modal Attacks on Fusion Models for 3D Object Detection

no code implementations · 28 Apr 2023 · Zhiyuan Cheng, Hongjun Choi, James Liang, Shiwei Feng, Guanhong Tao, Dongfang Liu, Michael Zuzak, Xiangyu Zhang

We argue that the weakest link of fusion models is their most vulnerable modality, and propose an attack framework that targets advanced camera-LiDAR fusion-based 3D object detection models through camera-only adversarial attacks.

3D Object Detection · Autonomous Driving · +2
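A minimal sketch of the camera-only attack idea described above, reduced to a generic PGD-style loop; `fusion_model` and `detection_loss` are placeholders for a real camera-LiDAR detector and an objective that degrades its detections, not the paper's framework.

```python
import torch

def camera_only_pgd(fusion_model, detection_loss, image, lidar,
                    eps=8 / 255, alpha=2 / 255, steps=10):
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        preds = fusion_model(adv, lidar)              # LiDAR branch stays clean
        loss = detection_loss(preds)                  # higher = worse detection
        grad = torch.autograd.grad(loss, adv)[0]
        adv = (adv + alpha * grad.sign()).detach()    # ascend on the camera input only
        adv = image + torch.clamp(adv - image, -eps, eps)  # L_inf projection
        adv = adv.clamp(0, 1)
    return adv
```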

Detecting Backdoors in Pre-trained Encoders

1 code implementation · CVPR 2023 · Shiwei Feng, Guanhong Tao, Siyuan Cheng, Guangyu Shen, Xiangzhe Xu, Yingqi Liu, Kaiyuan Zhang, Shiqing Ma, Xiangyu Zhang

We show the effectiveness of our method on image encoders pre-trained on ImageNet and on OpenAI's CLIP encoder trained on 400 million image-text pairs.

Self-Supervised Learning

Gradient Shaping: Enhancing Backdoor Attack Against Reverse Engineering

no code implementations · 29 Jan 2023 · Rui Zhu, Di Tang, Siyuan Tang, Guanhong Tao, Shiqing Ma, XiaoFeng Wang, Haixu Tang

Finally, we perform both theoretical and experimental analyses, showing that the GRASP enhancement does not reduce the effectiveness of stealthy attacks against weight-analysis-based backdoor detection methods, nor against other backdoor mitigation methods that do not rely on detection.

Backdoor Attack

BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense

1 code implementation · 16 Jan 2023 · Siyuan Cheng, Guanhong Tao, Yingqi Liu, Shengwei An, Xiangzhe Xu, Shiwei Feng, Guangyu Shen, Kaiyuan Zhang, QiuLing Xu, Shiqing Ma, Xiangyu Zhang

Attack forensics, a critical countermeasure for traditional cyber attacks, is hence important for defending against model backdoor attacks.

Backdoor Attack

MEDIC: Remove Model Backdoors via Importance Driven Cloning

no code implementations · CVPR 2023 · QiuLing Xu, Guanhong Tao, Jean Honorio, Yingqi Liu, Shengwei An, Guangyu Shen, Siyuan Cheng, Xiangyu Zhang

It trains the clone model from scratch on a very small subset of samples and minimizes a cloning loss that measures the differences between the activations of important neurons across the two models.

Knowledge Distillation
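A minimal sketch of the cloning-loss idea in the MEDIC entry above, under the assumption that "importance" is reduced to an index set of neurons; this is illustrative, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def cloning_loss(orig_acts, clone_acts, important_idx):
    """MSE over the selected (important) neurons only; acts are [batch, neurons]."""
    return F.mse_loss(clone_acts[:, important_idx], orig_acts[:, important_idx])

# Toy usage: treat the 64 neurons with the largest mean activation as "important".
orig_acts = torch.randn(8, 512)
clone_acts = torch.randn(8, 512, requires_grad=True)
important_idx = torch.topk(orig_acts.abs().mean(dim=0), k=64).indices
loss = cloning_loss(orig_acts, clone_acts, important_idx)
loss.backward()
```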

Backdoor Vulnerabilities in Normally Trained Deep Learning Models

no code implementations · 29 Nov 2022 · Guanhong Tao, Zhenting Wang, Siyuan Cheng, Shiqing Ma, Shengwei An, Yingqi Liu, Guangyu Shen, Zhuo Zhang, Yunshu Mao, Xiangyu Zhang

We leverage 20 different types of injected backdoor attacks in the literature as guidance and study their correspondences in normally trained models, which we call natural backdoor vulnerabilities.

Data Poisoning

Physical Attack on Monocular Depth Estimation with Optimal Adversarial Patches

1 code implementation · 11 Jul 2022 · Zhiyuan Cheng, James Liang, Hongjun Choi, Guanhong Tao, Zhiwen Cao, Dongfang Liu, Xiangyu Zhang

Experimental results show that our method can generate stealthy, effective, and robust adversarial patches for different target objects and models, achieving a mean depth estimation error of more than 6 meters and a 93% attack success rate (ASR) in object detection with a patch covering 1/9 of the vehicle's rear area.

3D Object Detection · Autonomous Driving · +3

DECK: Model Hardening for Defending Pervasive Backdoors

no code implementations · 18 Jun 2022 · Guanhong Tao, Yingqi Liu, Siyuan Cheng, Shengwei An, Zhuo Zhang, QiuLing Xu, Guangyu Shen, Xiangyu Zhang

As such, using the samples derived from our attack in adversarial training can harden a model against these backdoor vulnerabilities.
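A minimal sketch of the hardening step suggested above: one epoch of standard adversarial training where the perturbed samples come from an attack generator. `generate_attack_samples` is a placeholder for samples derived from the paper's attack.

```python
import torch
import torch.nn.functional as F

def harden_epoch(model, loader, optimizer, generate_attack_samples):
    model.train()
    for x, y in loader:
        x_adv = generate_attack_samples(model, x, y)        # attack-derived inputs
        logits = model(torch.cat([x, x_adv]))
        loss = F.cross_entropy(logits, torch.cat([y, y]))   # keep original labels
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```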

An Extractive-and-Abstractive Framework for Source Code Summarization

1 code implementation · 15 Jun 2022 · Weisong Sun, Chunrong Fang, Yuchen Chen, Quanjun Zhang, Guanhong Tao, Tingxu Han, Yifei Ge, Yudu You, Bin Luo

The extractive module in the framework performs extractive code summarization: it takes in the code snippet and predicts the important statements containing key factual details.

Code Summarization · Machine Translation · +2
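A toy sketch of the extractive step described above: score each statement of a snippet and keep the top-k as the important ones. `score_fn` stands in for the paper's learned model.

```python
def extract_important(statements, score_fn, k=3):
    """score_fn(statement) -> float; higher means more likely to carry key
    factual details. Returns the k highest-scoring statements in source order."""
    ranked = sorted(range(len(statements)),
                    key=lambda i: score_fn(statements[i]), reverse=True)
    return [statements[i] for i in sorted(ranked[:k])]
```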

Code Search based on Context-aware Code Translation

1 code implementation · 16 Feb 2022 · Weisong Sun, Chunrong Fang, Yuchen Chen, Guanhong Tao, Tingxu Han, Quanjun Zhang

We evaluate the effectiveness of our technique, called TranCS, on the CodeSearchNet corpus with 1,000 queries.

Code Search · Code Translation · +1

Bounded Adversarial Attack on Deep Content Features

1 code implementation · CVPR 2022 · QiuLing Xu, Guanhong Tao, Xiangyu Zhang

We propose a novel adversarial attack targeting content features in some deep layer, that is, individual neurons in the layer.

Adversarial Attack
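A minimal sketch of attacking an individual neuron in a deep layer, as described above, by gradient ascent on the input image; `feature_extractor`, the neuron index, and the step sizes are illustrative placeholders, not the paper's bounded formulation.

```python
import torch

def attack_neuron(feature_extractor, image, neuron_idx, steps=50, lr=0.01):
    adv = image.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([adv], lr=lr)
    for _ in range(steps):
        acts = feature_extractor(adv).flatten(1)   # [batch, num_neurons]
        loss = -acts[:, neuron_idx].mean()         # push the target neuron up
        opt.zero_grad()
        loss.backward()
        opt.step()
    return adv.detach().clamp(0, 1)
```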

Complex Backdoor Detection by Symmetric Feature Differencing

1 code implementation · CVPR 2022 · Yingqi Liu, Guangyu Shen, Guanhong Tao, Zhenting Wang, Shiqing Ma, Xiangyu Zhang

Our results on the TrojAI competition rounds 2-4, which have patch backdoors and filter backdoors, show that existing scanners may produce hundreds of false positives (i.e., clean models recognized as trojaned), while our technique removes 78-100% of them with only a small (0-30%) increase in false negatives, leading to a 17-41% overall accuracy improvement.

Backdoor Scanning for Deep Neural Networks through K-Arm Optimization

1 code implementation · 9 Feb 2021 · Guangyu Shen, Yingqi Liu, Guanhong Tao, Shengwei An, QiuLing Xu, Siyuan Cheng, Shiqing Ma, Xiangyu Zhang

By iteratively and stochastically selecting the most promising labels for optimization, guided by an objective function, we substantially reduce the complexity, making it feasible to handle models with many classes.
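A minimal sketch of the label-selection loop described above, reduced to an epsilon-greedy bandit; the paper's scheduler and objective differ, and `objective_step` is a placeholder that runs a few trigger-optimization steps for a label and returns its current objective value (lower is more promising).

```python
import random

def k_arm_schedule(labels, objective_step, rounds=100, eps=0.1):
    scores = {y: float("inf") for y in labels}
    for _ in range(rounds):
        if random.random() < eps:
            y = random.choice(labels)          # explore a random label
        else:
            y = min(scores, key=scores.get)    # exploit the most promising label
        scores[y] = objective_step(y)
    return min(scores, key=scores.get)         # label with the best objective
```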

D-square-B: Deep Distribution Bound for Natural-looking Adversarial Attack

no code implementations · 12 Jun 2020 · Qiu-Ling Xu, Guanhong Tao, Xiangyu Zhang

We propose a novel technique that can generate natural-looking adversarial examples by bounding the variations induced in internal activation values in some deep layer(s), through a distribution quantile bound and a polynomial barrier loss function.

Adversarial Attack
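A minimal sketch of the two ingredients named above, a per-neuron quantile bound over clean activations and a polynomial barrier on activation deviations; the shapes and exponent are illustrative assumptions, not the paper's exact loss.

```python
import torch

def quantile_bound(clean_acts, q=0.95):
    """Per-neuron bound from clean activations of shape [num_samples, neurons]."""
    return torch.quantile(clean_acts.abs(), q, dim=0)

def polynomial_barrier(acts_adv, acts_clean, bound, power=4):
    """Zero cost while a neuron's deviation stays inside the bound; polynomial
    penalty once it exceeds the bound."""
    excess = (acts_adv - acts_clean).abs() - bound
    return torch.clamp(excess, min=0.0).pow(power).mean()
```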

Attacks Meet Interpretability: Attribute-steered Detection of Adversarial Samples

1 code implementation · NeurIPS 2018 · Guanhong Tao, Shiqing Ma, Yingqi Liu, Xiangyu Zhang

Results show that our technique can achieve 94% detection accuracy for 7 different kinds of attacks with 9.91% false positives on benign inputs.

Attribute · Face Recognition · +1
