no code implementations • 15 Dec 2023 • Shengyao Zhang, Mi Zhang, Xudong Pan, Min Yang
To reduce the computation cost and energy consumption of large language models (LLMs), skimming-based acceleration progressively drops unimportant tokens of the input sequence along the layers of the LLM while preserving semantically important tokens.
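For intuition, here is a minimal sketch of the skimming idea, not the paper's actual implementation: the importance scorer, threshold, and layer structure below are hypothetical placeholders. At each layer, tokens whose learned importance score falls below a threshold are dropped from all subsequent computation, so later layers process shorter sequences.

```python
import torch
import torch.nn as nn

class SkimLayer(nn.Module):
    """Toy transformer layer that drops low-importance tokens (illustrative only)."""
    def __init__(self, d_model: int, keep_threshold: float = 0.5):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.scorer = nn.Linear(d_model, 1)   # hypothetical importance scorer
        self.keep_threshold = keep_threshold

    def forward(self, x: torch.Tensor):
        # x: (batch, seq_len, d_model)
        h, _ = self.attn(x, x, x)
        scores = torch.sigmoid(self.scorer(h)).squeeze(-1)   # (batch, seq_len)
        keep = scores >= self.keep_threshold                 # tokens to retain
        keep[:, 0] = True                                    # never drop everything
        # Sequences become ragged after dropping, so handle them one by one.
        return [h[i][keep[i]] for i in range(h.size(0))]

x = torch.randn(2, 16, 64)
kept = SkimLayer(d_model=64)(x)
print([t.shape for t in kept])  # shortened sequences flow to the next layer
```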
1 code implementation • 8 Dec 2023 • Huming Qiu, Junjie Sun, Mi Zhang, Xudong Pan, Min Yang
Deep neural networks (DNNs) are susceptible to backdoor attacks, where malicious functionality is embedded to allow attackers to trigger incorrect classifications.
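For context, a hedged sketch of how a classic patch-style (BadNets-style) backdoor trigger is applied; the trigger pattern and placement here are illustrative, not taken from this paper:

```python
import torch

def apply_patch_trigger(images: torch.Tensor, patch_size: int = 4) -> torch.Tensor:
    """Stamp a white square in the bottom-right corner (a classic trigger pattern)."""
    triggered = images.clone()
    triggered[:, :, -patch_size:, -patch_size:] = 1.0  # assumes pixels in [0, 1]
    return triggered

# A backdoored model behaves normally on clean inputs but maps any
# trigger-stamped input to the attacker-chosen target class.
clean = torch.rand(8, 3, 32, 32)
poisoned = apply_patch_trigger(clean)
```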
1 code implementation • 1 Nov 2023 • Mi Zhang, Xudong Pan, Min Yang
In this paper, we present JADE, a targeted linguistic fuzzing platform that strengthens the linguistic complexity of seed questions to simultaneously and consistently break a wide range of widely used LLMs in three groups: eight open-sourced Chinese, six commercial Chinese, and four commercial English LLMs.
1 code implementation • 7 Sep 2023 • Yifan Lu, Wenxuan Li, Mi Zhang, Xudong Pan, Min Yang
Our attack (dubbed Dehydra) effectively erases all ten mainstream black-box watermarks from DNNs, with only limited or even no data dependence.
no code implementations • 17 Mar 2023 • Qifan Xiao, Xudong Pan, Yifan Lu, Mi Zhang, Jiarun Dai, Min Yang
In this paper, we propose a novel plug-and-play defensive module that works alongside a trained LiDAR-based object detector to eliminate forged obstacles in which a major proportion of local parts have low objectness, i.e., a low degree of belonging to a real object.
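A hedged sketch of the filtering logic described above; the part partitioning, score names, and thresholds are placeholders, and the paper's detector-side module is more involved:

```python
import numpy as np

def filter_forged_obstacles(boxes, part_objectness, low_score=0.3, max_low_ratio=0.5):
    """Discard predicted obstacles in which most local parts score low objectness.

    boxes:           list of N predicted 3D boxes
    part_objectness: list of N arrays, one objectness score per local part
    """
    kept = []
    for box, scores in zip(boxes, part_objectness):
        low_ratio = np.mean(np.asarray(scores) < low_score)
        if low_ratio <= max_low_ratio:   # enough parts look like a real object
            kept.append(box)
    return kept
```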
no code implementations • 17 Mar 2023 • Yifan Yan, Xudong Pan, Mi Zhang, Min Yang
Copyright protection for deep neural networks (DNNs) is an urgent need for AI corporations.
no code implementations • 18 Jul 2022 • Xudong Pan, Qifan Xiao, Mi Zhang, Min Yang
To address this design flaw, we propose a simple yet effective security patch for KF-based MOT. Its core is an adaptive strategy that balances the focus of the KF between observations and predictions according to an anomaly index of the observation-prediction deviation; the patch has certified effectiveness against a generalized hijacking attack model.
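A minimal sketch of the adaptive idea; the anomaly index (a Mahalanobis distance here) and the gain schedule are simplified placeholders, not the paper's certified construction. When the observation deviates anomalously from the prediction, the update shrinks the Kalman gain so the filter trusts the prediction more:

```python
import numpy as np

def adaptive_kf_update(x_pred, P_pred, z, H, R, anomaly_threshold=3.0):
    """One Kalman update that shrinks the gain when the innovation looks anomalous.

    x_pred, P_pred: predicted state and covariance
    z, H, R:        observation, observation model, observation noise covariance
    """
    innovation = z - H @ x_pred
    S = H @ P_pred @ H.T + R                      # innovation covariance
    # Anomaly index: Mahalanobis distance of the observation-prediction deviation.
    anomaly = float(np.sqrt(innovation @ np.linalg.inv(S) @ innovation))
    K = P_pred @ H.T @ np.linalg.inv(S)           # standard Kalman gain
    if anomaly > anomaly_threshold:               # suspected hijacking:
        K = K * (anomaly_threshold / anomaly)     # down-weight the observation
    x_new = x_pred + K @ innovation
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_new, P_new
```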
no code implementations • 29 Jun 2022 • Xudong Pan, Yifan Yan, Shengyao Zhang, Mi Zhang, Min Yang
In this paper, we present a novel insider attack called Matryoshka, which employs an irrelevant scheduled-to-publish DNN model as a carrier model for the covert transmission of multiple secret models that memorize the functionality of private ML data stored in local data centers.
no code implementations • 30 Apr 2022 • Yifan Yan, Xudong Pan, Yining Wang, Mi Zhang, Min Yang
On 9 state-of-the-art white-box watermarking schemes and a broad set of industry-level DNN architectures, our attack, for the first time, reduces the embedded identity message in protected models to near-random noise.
no code implementations • 26 Oct 2020 • Xudong Pan, Mi Zhang, Yifan Yan, Jiaming Zhu, Min Yang
Among existing privacy attacks on the gradient of neural networks, the data reconstruction attack, which reverse-engineers the training batch from the gradient, poses a severe threat to the private training data.
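For context, a minimal sketch of the gradient-matching formulation underlying such attacks, following the generic "deep leakage from gradients" recipe rather than this paper's specific method, and assuming the true label `y` is known: the attacker optimizes dummy data until its gradient matches the leaked one.

```python
import torch
import torch.nn as nn

def reconstruct_from_gradient(model, true_grads, x_shape, y, steps=200, lr=0.1):
    """Optimize dummy input so its gradient matches the leaked gradient (DLG-style)."""
    x_dummy = torch.randn(x_shape, requires_grad=True)
    opt = torch.optim.Adam([x_dummy], lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        dummy_grads = torch.autograd.grad(
            loss_fn(model(x_dummy), y), model.parameters(), create_graph=True
        )
        # L2 distance between the dummy gradient and the observed gradient.
        grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
        grad_diff.backward()
        opt.step()
    return x_dummy.detach()
```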
no code implementations • 16 Aug 2019 • Ruozi Huang, Mi Zhang, Xudong Pan, Beina Sheng
Style is ubiquitous in our daily language use, but what does language style mean to learning machines?
no code implementations • ICML 2018 • Xudong Pan, Mi Zhang, Daizong Ding
Recently, a unified model for image-to-image translation tasks within the adversarial learning framework has attracted widespread research interest among computer vision practitioners.