Search Results for author: Yinggui Wang

Found 9 papers, 3 papers with code

Privacy Evaluation Benchmarks for NLP Models

1 code implementation • 24 Sep 2024 • Wei Huang, Yinggui Wang, Cen Chen

In this paper, we present a privacy attack and defense evaluation benchmark for NLP, covering both conventional/small models and large language models (LLMs).

Knowledge Distillation
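
The excerpt does not spell out which attacks the benchmark includes, but membership inference is a standard component of such suites. Below is a minimal, self-contained sketch of a loss-threshold membership-inference attack scored by AUC; the per-example losses are synthetic stand-ins, not the benchmark's actual models, data, or API.

```python
# Loss-threshold membership inference, scored with AUC. Members (examples
# seen in training) tend to have lower loss than non-members, so the
# negative loss works as a membership score. Losses here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
member_losses = rng.gamma(shape=2.0, scale=0.3, size=1000)      # lower on average
nonmember_losses = rng.gamma(shape=2.0, scale=0.6, size=1000)

scores = np.concatenate([-member_losses, -nonmember_losses])    # higher = "member"
labels = np.concatenate([np.ones(1000), np.zeros(1000)])

# AUC via the Mann-Whitney rank statistic: the probability that a random
# member outscores a random non-member.
order = scores.argsort()
ranks = np.empty(len(scores))
ranks[order] = np.arange(1, len(scores) + 1)
n_pos, n_neg = 1000, 1000
auc = (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
print(f"membership-inference AUC: {auc:.3f}")
```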

Information Leakage from Embedding in Large Language Models

no code implementations • 20 May 2024 • Zhipeng Wan, Anda Cheng, Yinggui Wang, Lei Wang

To address this issue, we present Embed Parrot, a Transformer-based method that reconstructs inputs from embeddings in deep layers.
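
The excerpt describes Embed Parrot only at a high level. As a rough illustration of the attack surface, the sketch below trains a small Transformer to map deep-layer hidden states back to token ids; the dimensions, architecture, and data are invented for illustration, not Embed Parrot's actual design.

```python
# Toy embedding-inversion head: a small Transformer maps deep-layer hidden
# states back to token ids, trained with cross-entropy against the original
# tokens. All sizes here are illustrative.
import torch
import torch.nn as nn

hidden_dim, vocab_size, seq_len = 256, 1000, 16

class InversionHead(nn.Module):
    def __init__(self):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=hidden_dim, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, deep_embeddings):                      # (B, T, H)
        return self.lm_head(self.encoder(deep_embeddings))   # (B, T, V) logits

model = InversionHead()
embeds = torch.randn(4, seq_len, hidden_dim)             # stand-in for leaked states
token_ids = torch.randint(0, vocab_size, (4, seq_len))   # the inputs to recover
loss = nn.functional.cross_entropy(model(embeds).flatten(0, 1), token_ids.flatten())
loss.backward()   # one training step of the inversion model
print(f"reconstruction loss: {loss.item():.2f}")
```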

Ditto: Quantization-aware Secure Inference of Transformers upon MPC

1 code implementation • 9 May 2024 • Haoqi Wu, Wenjing Fang, Yancheng Zheng, Junming Ma, Jin Tan, Yinggui Wang, Lei Wang

We then propose novel MPC primitives to support the type conversions essential to quantization, and implement quantization-aware MPC execution for secure quantized inference.

Quantization
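
The excerpt's key technical point is that quantized secure inference must convert secret shares between bit-widths. The plain-Python sketch below shows why a naive conversion fails: the wrap bit of the share sum is lost, and computing it without opening the shares is exactly what a dedicated MPC type-conversion primitive must do. Names and ring sizes are illustrative, not Ditto's API.

```python
# Additive secret sharing mod 2^32, and why naively reinterpreting the same
# shares mod 2^64 is wrong: the 32-bit sum may wrap, and the lost carry
# (the "wrap bit") shifts the reconstructed value by 2^32.
import numpy as np

R32, R64 = 1 << 32, 1 << 64
rng = np.random.default_rng(1)

def share_mod(x, ring):            # split x into two additive shares mod ring
    r = int(rng.integers(0, ring))
    return r, (x - r) % ring

secret = 42
s0, s1 = share_mod(secret, R32)
assert (s0 + s1) % R32 == secret           # correct in the original ring

naive = (s0 + s1) % R64                    # "upcast" by reinterpreting shares
wrap = (s0 + s1) >= R32                    # did the 32-bit sum wrap?
print(naive == secret, naive == secret + R32 * wrap)   # False, True (almost surely)
```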

Privacy-Preserving End-to-End Spoken Language Understanding

no code implementations • 22 Mar 2024 • Yinggui Wang, Wei Huang, Le Yang

Thus, the SLU system must ensure that a potential malicious attacker cannot deduce users' sensitive attributes, while avoiding a significant loss in SLU accuracy.

Privacy Preserving • speech-recognition +2
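
The excerpt states the goal (hide sensitive attributes without hurting SLU accuracy) but not the mechanism. One standard way to realize such a trade-off, shown below purely as an illustration and not necessarily this paper's method, is adversarial training with a gradient reversal layer.

```python
# Adversarial attribute protection via gradient reversal: the encoder is
# trained to help the SLU task head while fooling the attribute head.
# Dimensions and heads are toy stand-ins.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x
    @staticmethod
    def backward(ctx, grad):       # flip gradients flowing back to the encoder
        return -grad

feat_dim, n_intents, n_attrs = 128, 10, 2
encoder = nn.Sequential(nn.Linear(40, feat_dim), nn.ReLU())  # toy acoustic encoder
slu_head = nn.Linear(feat_dim, n_intents)                    # task: intent label
attr_head = nn.Linear(feat_dim, n_attrs)                     # adversary: e.g. gender

x = torch.randn(8, 40)
intent = torch.randint(0, n_intents, (8,))
attr = torch.randint(0, n_attrs, (8,))

z = encoder(x)
task_loss = nn.functional.cross_entropy(slu_head(z), intent)
adv_loss = nn.functional.cross_entropy(attr_head(GradReverse.apply(z)), attr)
(task_loss + adv_loss).backward()   # encoder learns the task while
                                    # un-learning the sensitive attribute
```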

Inference Attacks Against Face Recognition Model without Classification Layers

no code implementations • 24 Jan 2024 • Yuanqing Huang, Huilong Chen, Yinggui Wang, Lei Wang

To the best of our knowledge, the proposed attack model is the first in the literature developed for FR models without a classification layer.

Face Recognition • Generative Adversarial Network +3
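
Without a classification layer, an attacker sees only embeddings, so inference attacks typically exploit similarity structure in embedding space. The sketch below shows the simplest such signal, cosine-similarity thresholding against a gallery, using synthetic embeddings; the paper's actual attack model is presumably more involved.

```python
# Cosine-similarity thresholding against a gallery of face embeddings:
# a probe of an enrolled person scores far higher than an outsider's.
import numpy as np

rng = np.random.default_rng(2)
dim = 512

def normed(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

gallery = normed(rng.normal(size=(100, dim)))    # enrolled identities
# Another "photo" of an enrolled person: a small perturbation of their embedding.
probe_member = normed(gallery[7] + 0.02 * rng.normal(size=dim))
probe_outsider = normed(rng.normal(size=dim))

for name, probe in [("member", probe_member), ("outsider", probe_outsider)]:
    score = float((gallery @ probe).max())       # best cosine similarity
    verdict = "in gallery" if score > 0.5 else "not in gallery"
    print(f"{name}: max similarity {score:.2f} -> {verdict}")
```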

UPFL: Unsupervised Personalized Federated Learning towards New Clients

no code implementations • 29 Jul 2023 • Tiandi Ye, Cen Chen, Yinggui Wang, Xiang Li, Ming Gao

To address this challenge, we extend the adaptive risk minimization technique to the unsupervised personalized federated learning setting and propose our method, FedTTA.

Knowledge Distillation • Personalized Federated Learning
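
The excerpt says FedTTA extends adaptive risk minimization to new clients who have no labels. As a generic illustration of that idea, the sketch below personalizes a copy of the global model by entropy minimization on a client's unlabeled data; it is not FedTTA's exact architecture or objective.

```python
# Unsupervised test-time adaptation of a received global model on a new
# client's unlabeled data, via entropy minimization.
import copy
import torch
import torch.nn as nn

global_model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))
client_model = copy.deepcopy(global_model)            # personalize a local copy
opt = torch.optim.SGD(client_model.parameters(), lr=1e-2)

unlabeled = torch.randn(32, 20)                       # new client's data, no labels
for _ in range(5):                                    # a few adaptation steps
    probs = client_model(unlabeled).softmax(dim=-1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(-1).mean()
    opt.zero_grad()
    entropy.backward()        # sharpen predictions on the local distribution
    opt.step()
print(f"final prediction entropy: {entropy.item():.3f}")
```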

You Can Backdoor Personalized Federated Learning

1 code implementation • 29 Jul 2023 • Tiandi Ye, Cen Chen, Yinggui Wang, Xiang Li, Ming Gao

The resistance of pFL methods with parameter decoupling is attributed to the heterogeneity of classifiers between malicious clients and their benign counterparts.

Backdoor Attack • Meta-Learning +1
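
The mechanism the excerpt names, heterogeneous classifiers under parameter decoupling, corresponds to pFL schemes that aggregate only the feature extractor while each client keeps a private head. A minimal sketch of that decoupled aggregation (names illustrative):

```python
# Parameter decoupling in pFL: the server averages only the shared feature
# extractor ("body"); each client keeps a private classifier head, so a
# poisoned head never reaches benign clients.
import torch
import torch.nn as nn

class DecoupledNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(20, 64), nn.ReLU())  # shared, aggregated
        self.head = nn.Linear(64, 5)                             # private, local only
    def forward(self, x):
        return self.head(self.body(x))

clients = [DecoupledNet() for _ in range(3)]

# FedAvg over body parameters only; heads (including a backdoored one) are
# excluded from aggregation.
with torch.no_grad():
    avg = {k: torch.stack([c.body.state_dict()[k] for c in clients]).mean(0)
           for k in clients[0].body.state_dict()}
    for c in clients:
        c.body.load_state_dict(avg)    # each client keeps its own head
```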
