1 code implementation • 16 Apr 2024 • Chanwoo Bae, Guanhong Tao, Zhuo Zhang, Xiangyu Zhang
As such, analysts often resort to text search techniques to identify existing malware reports based on the symptoms they observe, exploiting the fact that malware samples, especially those from the same origin, exhibit substantial similarity.
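The symptom-based report lookup described above can be illustrated with a minimal bag-of-words similarity search. This is only a sketch of the general idea; the function names and the cosine-similarity scoring are assumptions for illustration, not the paper's actual retrieval method:

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_reports(symptoms: str, reports: list[str]) -> list[tuple[float, str]]:
    """Rank existing malware reports by similarity to observed symptoms."""
    return sorted(((cosine_similarity(symptoms, r), r) for r in reports),
                  reverse=True)
```

A real system would use TF-IDF weighting or a full-text search engine rather than raw term counts, but the ranking principle is the same.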
no code implementations • 10 Mar 2024 • Zhuo Zhang, Jingyuan Zhang, Jintao Huang, Lizhen Qu, Hongzhi Zhang, Zenglin Xu
Extensive experiments on real-world medical data demonstrate the effectiveness of FedPIT in improving federated few-shot performance while preserving privacy and robustness against data heterogeneity.
1 code implementation • 19 Feb 2024 • Zian Su, Xiangzhe Xu, Ziyang Huang, Zhuo Zhang, Yapeng Ye, Jianjun Huang, Xiangyu Zhang
Our pre-trained model improves the state of the art on these tasks from 53% to 64%, 49% to 60%, and 74% to 94%, respectively.
1 code implementation • 8 Feb 2024 • Guangyu Shen, Siyuan Cheng, Kaiyuan Zhang, Guanhong Tao, Shengwei An, Lu Yan, Zhuo Zhang, Shiqing Ma, Xiangyu Zhang
Large Language Models (LLMs) have become prevalent across diverse sectors, transforming human life with their extraordinary reasoning and comprehension abilities.
no code implementations • 25 Jan 2024 • Xiaolong Jin, Zhuo Zhang, Xiangyu Zhang
Given the low cost of our method, we are able to conduct a large scale study regarding LLM alignment issues in different worlds.
no code implementations • 8 Dec 2023 • Zhuo Zhang, Guangyu Shen, Guanhong Tao, Siyuan Cheng, Xiangyu Zhang
Instead, it exploits the fact that even when an LLM rejects a toxic request, a harmful response often hides deep in the output logits.
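The observation that a harmful continuation can hide below the top-ranked token can be illustrated with a toy next-token distribution. The vocabulary and logit values below are invented for illustration and do not reflect any real model or the paper's actual technique:

```python
import math

def softmax(logits):
    """Convert raw logits into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical next-token candidates after a refused toxic request.
vocab = ["Sorry", "I", "Sure", "Here"]
logits = [5.0, 4.2, 3.9, 1.0]

probs = softmax(logits)
ranked = sorted(zip(vocab, probs), key=lambda t: t[1], reverse=True)
# Greedy decoding emits the refusal token ("Sorry"), yet compliant tokens
# such as "Sure" still carry non-negligible probability mass deeper in
# the ranking of the output distribution.
```

This is why inspecting the full logit vector, rather than only the sampled text, can reveal responses the surface output appears to suppress.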
1 code implementation • 20 Dec 2022 • Zhuo Zhang, Yuanhang Yang, Yong Dai, Lizhen Qu, Zenglin Xu
To facilitate the research of PETuning in FL, we also develop a federated tuning framework FedPETuning, which allows practitioners to exploit different PETuning methods under the FL training paradigm conveniently.
no code implementations • 29 Nov 2022 • Guanhong Tao, Zhenting Wang, Siyuan Cheng, Shiqing Ma, Shengwei An, Yingqi Liu, Guangyu Shen, Zhuo Zhang, Yunshu Mao, Xiangyu Zhang
We leverage 20 different types of injected backdoor attacks in the literature as the guidance and study their correspondences in normally trained models, which we call natural backdoor vulnerabilities.
no code implementations • 17 Sep 2022 • Ye Bai, Jie Li, Wenjing Han, Hao Ni, Kaituo Xu, Zhuo Zhang, Cheng Yi, Xiaorui Wang
Experimental results show that the proposed model achieves competitive performance with 1/3 of the parameters of the encoder, compared with the full-parameter model.
no code implementations • 18 Jun 2022 • Guanhong Tao, Yingqi Liu, Siyuan Cheng, Shengwei An, Zhuo Zhang, QiuLing Xu, Guangyu Shen, Xiangyu Zhang
As such, using the samples derived from our attack in adversarial training can harden a model against these backdoor vulnerabilities.
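As a rough illustration of hardening a model by training on attack-derived samples, here is a generic FGSM-style adversarial training loop on a toy logistic model. This is an assumption-laden stand-in, not the paper's attack or its training procedure:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def grad_wrt_x(w, b, x, y):
    # Gradient of binary cross-entropy loss w.r.t. the input x
    # for the model sigmoid(w*x + b).
    return (sigmoid(w * x + b) - y) * w

def fgsm(w, b, x, y, eps=0.1):
    # Perturb the input in the direction that increases the loss.
    g = grad_wrt_x(w, b, x, y)
    return x + eps * (1 if g > 0 else -1)

def adversarial_train(data, epochs=200, lr=0.5, eps=0.1):
    """SGD on clean samples plus their adversarial perturbations."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            for xi in (x, fgsm(w, b, x, y, eps)):
                p = sigmoid(w * xi + b)
                w -= lr * (p - y) * xi
                b -= lr * (p - y)
    return w, b
```

The key design point mirrors the sentence above: the adversarial examples fed into training come from the attack itself, so the hardened model learns to classify them correctly.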
1 code implementation • 30 May 2022 • Jiachen Yang, Zhuo Zhang, Yicheng Gong, Shukun Ma, Xiaolan Guo, Yue Yang, Shuai Xiao, Jiabao Wen, Yang Li, Xinbo Gao, Wen Lu, Qinggang Meng
Data availability has now become a bottleneck for deep learning.
no code implementations • 11 Feb 2022 • Guangyu Shen, Yingqi Liu, Guanhong Tao, QiuLing Xu, Zhuo Zhang, Shengwei An, Shiqing Ma, Xiangyu Zhang
We develop a novel optimization method for NLP backdoor inversion.
no code implementations • 14 Jan 2019 • Mustafa Talha Avcu, Zhuo Zhang, Derrick Wei Shih Chan
We design SeizNet, a convolutional neural network for seizure detection.