1 code implementation • 24 Sep 2024 • Wei Huang, Yinggui Wang, Cen Chen
In this paper, we present a privacy attack and defense evaluation benchmark in the field of NLP, which covers both conventional/small models and large language models (LLMs).
no code implementations • 20 May 2024 • Zhipeng Wan, Anda Cheng, Yinggui Wang, Lei Wang
To address this issue, we then present Embed Parrot, a Transformer-based method that reconstructs the input from embeddings at deep layers.
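A hedged sketch of the embedding-inversion idea behind Embed Parrot: a small Transformer that maps leaked deep-layer hidden states back to a distribution over input tokens. All class names, layer counts, and dimensions here are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class EmbedInverter(nn.Module):
    """Hypothetical inverter: deep-layer embeddings -> token logits."""
    def __init__(self, hidden_dim=768, vocab_size=30522, n_layers=4, n_heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=hidden_dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.lm_head = nn.Linear(hidden_dim, vocab_size)  # per-token logits

    def forward(self, deep_embeddings):
        # deep_embeddings: (batch, seq_len, hidden_dim) leaked hidden states
        h = self.encoder(deep_embeddings)
        return self.lm_head(h)  # (batch, seq_len, vocab_size)

# Training would minimize cross-entropy between predicted logits and the true
# input token ids, given (embedding, token) pairs collected offline.
inverter = EmbedInverter()
logits = inverter(torch.randn(2, 16, 768))
recovered_ids = logits.argmax(dim=-1)  # greedy token reconstruction
```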
1 code implementation • 9 May 2024 • Haoqi Wu, Wenjing Fang, Yancheng Zheng, Junming Ma, Jin Tan, Yinggui Wang, Lei Wang
We then propose novel MPC primitives to support the type conversions essential to quantization and implement quantization-aware MPC execution for secure quantized inference.
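A minimal sketch (plain NumPy, not actual MPC) of the quantization and type conversions that secure inference must reproduce under MPC: a float tensor is mapped to uint8 with a scale and zero point, and each downcast/upcast between widths is a conversion that would require a dedicated secure primitive. The function names and bit widths are my own illustrative choices.

```python
import numpy as np

def quantize(x, n_bits=8):
    # Asymmetric quantization: float32 -> uint8 plus (scale, zero_point).
    qmin, qmax = 0, 2**n_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    # Upcast to int32 before subtracting to avoid unsigned underflow.
    return (q.astype(np.int32) - zero_point) * scale

x = np.random.randn(4, 4).astype(np.float32)
q, s, z = quantize(x)
print(np.abs(dequantize(q, s, z) - x).max())  # small quantization error
```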
no code implementations • 22 Mar 2024 • Yinggui Wang, Wei Huang, Le Yang
The SLU system therefore needs to ensure that a potential malicious attacker cannot deduce users' sensitive attributes, while avoiding a significant loss in SLU accuracy.
no code implementations • 14 Mar 2024 • Yinggui Wang, Yuanqing Huang, Jianshu Li, Le Yang, Kai Song, Lei Wang
Specifically, face images are masked in the frequency domain using an adaptive MixUp strategy.
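An illustrative sketch (my reading of the idea, not the paper's code) of masking a face image in the frequency domain with a MixUp-style blend: transform two images with a 2D FFT, mix their spectra with a coefficient lambda, and invert. An adaptive strategy would choose lambda, or the frequency bands to mix, per image rather than fixing it as done here.

```python
import numpy as np

def frequency_mixup(img_a, img_b, lam=0.7):
    # img_a is the face to protect; img_b supplies the masking spectrum.
    fa, fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    mixed = lam * fa + (1.0 - lam) * fb   # blend in the frequency domain
    return np.real(np.fft.ifft2(mixed))   # back to the pixel domain

face = np.random.rand(112, 112)   # stand-in for a grayscale face crop
other = np.random.rand(112, 112)
masked = frequency_mixup(face, other)
```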
no code implementations • 24 Jan 2024 • Yuanqing Huang, Huilong Chen, Yinggui Wang, Lei Wang
To the best of our knowledge, the proposed attack model is the first in the literature developed for FR models without a classification layer.
no code implementations • 18 Jan 2024 • Wei Huang, Yinggui Wang, Anda Cheng, Aihui Zhou, Chaofan Yu, Lei Wang
In this paper, we propose a secure distributed LLM based on model slicing.
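A toy sketch (the partitioning and party assignments are my assumptions, not the paper's design) of model slicing: a stack of Transformer blocks is split into slices, and inference passes only activations between them, so no single party holds the full model. In a real deployment a sensitive slice might run inside a trusted execution environment or on the data owner's side.

```python
import torch
import torch.nn as nn

blocks = nn.ModuleList(
    nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True)
    for _ in range(6)
)
slice_a = blocks[:2]   # e.g. held by the client
slice_b = blocks[2:5]  # e.g. held by the server (or a TEE)
slice_c = blocks[5:]   # e.g. back on the client

def run_slices(x, *slices):
    for s in slices:
        for block in s:
            x = block(x)  # only activations cross the slice boundary
    return x

hidden = run_slices(torch.randn(1, 8, 256), slice_a, slice_b, slice_c)
```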
no code implementations • 29 Jul 2023 • Tiandi Ye, Cen Chen, Yinggui Wang, Xiang Li, Ming Gao
To address this challenge, we extend the adaptive risk minimization technique to the unsupervised personalized federated learning setting and propose our method, FedTTA.
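A hedged sketch of the general test-time adaptation idea underlying methods like FedTTA: each client adapts the received global model to its own unlabeled test batch. Here adaptation is plain entropy minimization on the predictions, a common TTA objective; the paper's adaptive risk minimization formulation differs in its specifics.

```python
import torch
import torch.nn.functional as F

def test_time_adapt(model, unlabeled_batch, steps=1, lr=1e-3):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        probs = F.softmax(model(unlabeled_batch), dim=-1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1).mean()
        opt.zero_grad()
        entropy.backward()  # push the model toward confident predictions
        opt.step()
    return model

model = torch.nn.Linear(32, 10)               # stand-in for a client model
adapted = test_time_adapt(model, torch.randn(16, 32))
```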
1 code implementation • 29 Jul 2023 • Tiandi Ye, Cen Chen, Yinggui Wang, Xiang Li, Ming Gao
The resistance of pFL methods with parameter decoupling stems from the heterogeneity of the classifiers between malicious clients and their benign counterparts.
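A minimal sketch of parameter decoupling in personalized FL (module names and sizes are illustrative): only the feature extractor is averaged across clients, while each client keeps a private classifier head. Because the heads are never aggregated, a malicious client's poisoned classifier cannot propagate to benign clients, which is the resistance described above.

```python
import torch
import torch.nn as nn

class DecoupledModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.extractor = nn.Linear(32, 16)   # shared, aggregated part
        self.classifier = nn.Linear(16, 10)  # private, personalized part

    def forward(self, x):
        return self.classifier(torch.relu(self.extractor(x)))

def aggregate_extractors(clients):
    # FedAvg over extractor weights only; classifier heads stay local.
    with torch.no_grad():
        for name, _ in clients[0].extractor.named_parameters():
            stacked = torch.stack(
                [dict(c.extractor.named_parameters())[name] for c in clients])
            mean = stacked.mean(dim=0)
            for c in clients:
                dict(c.extractor.named_parameters())[name].copy_(mean)

clients = [DecoupledModel() for _ in range(3)]
aggregate_extractors(clients)
```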