Search Results for author: Zhuo Zhang

Found 13 papers, 5 papers with code

Threat Behavior Textual Search by Attention Graph Isomorphism

1 code implementation • 16 Apr 2024 • Chanwoo Bae, Guanhong Tao, Zhuo Zhang, Xiangyu Zhang

As such, analysts often resort to text search techniques to identify existing malware reports based on the symptoms they observe, exploiting the fact that malware samples, especially those from the same origin, share substantial similarity.
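The generic text-search baseline the snippet alludes to can be sketched as bag-of-words retrieval over a corpus of reports. This is only an illustration of that baseline, not the paper's attention-graph-isomorphism method; the report strings and function names below are made up.

```python
from collections import Counter
from math import sqrt

def cosine_sim(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_report(symptoms, reports):
    """Return the report whose text best matches the observed symptoms."""
    query = Counter(symptoms.lower().split())
    scored = [(cosine_sim(query, Counter(r.lower().split())), r) for r in reports]
    return max(scored)[1]

# Hypothetical mini-corpus of malware reports.
reports = [
    "trojan modifies registry run keys and beacons to c2 over http",
    "ransomware encrypts documents and drops ransom note",
]
print(best_report("process beacons to c2 and edits registry keys", reports))
# -> the trojan report, which shares the most symptom terms
```

Reports from the same malware family reuse vocabulary (C2 beaconing, registry persistence), which is why even this crude lexical matching often surfaces the right family.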

Attribute • Malware Analysis +2

FedPIT: Towards Privacy-preserving and Few-shot Federated Instruction Tuning

no code implementations • 10 Mar 2024 • Zhuo Zhang, Jingyuan Zhang, Jintao Huang, Lizhen Qu, Hongzhi Zhang, Zenglin Xu

Extensive experiments on real-world medical data demonstrate the effectiveness of FedPIT in improving federated few-shot performance while preserving privacy and robustness against data heterogeneity.

Federated Learning • In-Context Learning +1

Rapid Optimization for Jailbreaking LLMs via Subconscious Exploitation and Echopraxia

1 code implementation • 8 Feb 2024 • Guangyu Shen, Siyuan Cheng, Kaiyuan Zhang, Guanhong Tao, Shengwei An, Lu Yan, Zhuo Zhang, Shiqing Ma, Xiangyu Zhang

Large Language Models (LLMs) have become prevalent across diverse sectors, transforming human life with their extraordinary reasoning and comprehension abilities.

MULTIVERSE: Exposing Large Language Model Alignment Problems in Diverse Worlds

no code implementations • 25 Jan 2024 • Xiaolong Jin, Zhuo Zhang, Xiangyu Zhang

Given the low cost of our method, we are able to conduct a large-scale study of LLM alignment issues in different worlds.

Language Modelling • Large Language Model

Make Them Spill the Beans! Coercive Knowledge Extraction from (Production) LLMs

no code implementations • 8 Dec 2023 • Zhuo Zhang, Guangyu Shen, Guanhong Tao, Siyuan Cheng, Xiangyu Zhang

Instead, it exploits the fact that even when an LLM rejects a toxic request, a harmful response often hides deep in the output logits.
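The idea that a harmful response "hides in the output logits" can be illustrated with a toy next-token distribution: greedy decoding emits a refusal, yet a compliant continuation sits just below it in the ranking. The vocabulary and logit values below are invented for illustration and do not come from any real model.

```python
import numpy as np

# Toy next-token logits over a tiny made-up vocabulary.
vocab = ["Sorry", "I", "Sure", "The", "recipe"]
logits = np.array([5.0, 3.0, 4.2, 1.0, 0.5])

# Softmax (shifted for numerical stability).
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# Greedy decoding picks the refusal token ...
print(vocab[int(np.argmax(probs))])   # -> "Sorry"

# ... but a compliant continuation is ranked second, so forcing
# decoding past the top-1 token can surface it.
ranked = [vocab[i] for i in np.argsort(-probs)]
print(ranked[1])                      # -> "Sure"
```

This is why the snippet distinguishes what an LLM *says* (its top-ranked token) from what its logits still encode.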

When Federated Learning Meets Pre-trained Language Models' Parameter-Efficient Tuning Methods

1 code implementation • 20 Dec 2022 • Zhuo Zhang, Yuanhang Yang, Yong Dai, Lizhen Qu, Zenglin Xu

To facilitate research on PETuning in FL, we also develop a federated tuning framework, FedPETuning, which allows practitioners to conveniently apply different PETuning methods under the FL training paradigm.

Federated Learning

Backdoor Vulnerabilities in Normally Trained Deep Learning Models

no code implementations • 29 Nov 2022 • Guanhong Tao, Zhenting Wang, Siyuan Cheng, Shiqing Ma, Shengwei An, Yingqi Liu, Guangyu Shen, Zhuo Zhang, Yunshu Mao, Xiangyu Zhang

We leverage 20 different types of injected backdoor attacks in the literature as the guidance and study their correspondences in normally trained models, which we call natural backdoor vulnerabilities.

Data Poisoning

Parameter-Efficient Conformers via Sharing Sparsely-Gated Experts for End-to-End Speech Recognition

no code implementations • 17 Sep 2022 • Ye Bai, Jie Li, Wenjing Han, Hao Ni, Kaituo Xu, Zhuo Zhang, Cheng Yi, Xiaorui Wang

Experimental results show that the proposed model achieves competitive performance with 1/3 of the parameters of the encoder, compared with the full-parameter model.

Knowledge Distillation • Speech Recognition +1

DECK: Model Hardening for Defending Pervasive Backdoors

no code implementations • 18 Jun 2022 • Guanhong Tao, Yingqi Liu, Siyuan Cheng, Shengwei An, Zhuo Zhang, QiuLing Xu, Guangyu Shen, Xiangyu Zhang

As such, using the samples derived from our attack in adversarial training can harden a model against these backdoor vulnerabilities.
