Search Results for author: Zhuotao Liu

Found 5 papers, 2 papers with code

Pencil: Private and Extensible Collaborative Learning without the Non-Colluding Assumption

no code implementations • 17 Mar 2024 • Xuanqi Liu, Zhuotao Liu, Qi Li, Ke Xu, Mingwei Xu

In this paper, we present Pencil, the first private training framework for collaborative learning that simultaneously offers data privacy, model privacy, and extensibility to multiple data providers, without relying on the non-colluding assumption.

Federated Learning · Privacy Preserving

Brain-on-Switch: Towards Advanced Intelligent Network Data Plane via NN-Driven Traffic Analysis at Line-Speed

1 code implementation • 17 Mar 2024 • Jinzhu Yan, Haotian Xu, Zhuotao Liu, Qi Li, Ke Xu, Mingwei Xu, Jianping Wu

Many types of NNs (such as Recurrent Neural Networks (RNNs) and Transformers) that are designed to work with sequential data have advantages over tree-based models, because they can take raw network data as input without complex feature computations on the fly.
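To make the raw-input claim concrete, here is a minimal sketch of a sequence model consuming per-packet features directly, with no flow-level feature engineering. The GRU architecture, the (packet size, inter-arrival time) encoding, and the class count are illustrative assumptions, not the paper's design.

```python
# Illustrative only: a tiny GRU that classifies a flow from raw per-packet
# features, with no hand-crafted flow statistics computed beforehand.
import torch
import torch.nn as nn

class FlowClassifier(nn.Module):
    def __init__(self, n_classes: int = 5, hidden: int = 64):
        super().__init__()
        self.rnn = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, pkts: torch.Tensor) -> torch.Tensor:
        # pkts: (batch, seq_len, 2) holding raw (size, inter-arrival) pairs
        _, h = self.rnn(pkts)           # h: (1, batch, hidden)
        return self.head(h.squeeze(0))  # one set of class logits per flow

model = FlowClassifier()
flows = torch.rand(8, 32, 2)   # 8 flows, 32 raw packets each
print(model(flows).shape)      # torch.Size([8, 5])
```

A tree-based model would instead require aggregate flow statistics (counts, means, durations) to be computed before inference, which is the on-the-fly feature computation the abstract refers to.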

Defending Against Data Reconstruction Attacks in Federated Learning: An Information Theory Approach

no code implementations • 2 Mar 2024 • Qi Tan, Qi Li, Yi Zhao, Zhuotao Liu, Xiaobing Guo, Ke Xu

Based on the channel model, we propose algorithms that constrain the information transmitted in a single round of local training.

Federated Learning
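The paper's channel-model algorithms are not reproduced here. As a loosely related stand-in, the sketch below bounds what a single round of local training can reveal by clipping each gradient and adding calibrated Gaussian noise, the standard differentially private SGD mechanism; the function name and all constants are illustrative assumptions.

```python
# Illustrative stand-in, not the paper's algorithm: limit what one round of
# local training reveals by clipping each gradient and adding Gaussian noise.
import torch

def privatized_step(params, grads, lr=0.1, clip=1.0, sigma=0.5):
    """One noisy SGD step; clip and sigma jointly bound per-round leakage."""
    for p, g in zip(params, grads):
        g = g * (clip / (g.norm() + 1e-12)).clamp(max=1.0)  # bound sensitivity
        g = g + sigma * clip * torch.randn_like(g)          # add calibrated noise
        p.data -= lr * g
```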

LLMs Can Understand Encrypted Prompt: Towards Privacy-Computing Friendly Transformers

1 code implementation • 28 May 2023 • Xuanqi Liu, Zhuotao Liu

The community has explored building private inference frameworks for transformer-based large language models (LLMs) in a server-client setting, where the server holds the model parameters and the client supplies its private data (or prompt) for inference.
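A minimal sketch of that server-client dataflow follows, assuming a transparent `MockCiphertext` stand-in in place of a real HE/MPC backend so the ownership split is visible; none of these class names come from the paper, and this is not its actual protocol.

```python
# Protocol-shape sketch only: "encryption" here is a transparent stand-in.
# The server never sees the plaintext prompt; the client never sees weights.
import torch

class MockCiphertext:
    """Pretend ciphertext supporting the linear ops HE schemes typically offer."""
    def __init__(self, value): self.value = value
    def matmul(self, w): return MockCiphertext(self.value @ w)
    def add(self, b): return MockCiphertext(self.value + b)

class Client:
    def encrypt(self, prompt_emb): return MockCiphertext(prompt_emb)
    def decrypt(self, ct): return ct.value

class Server:
    def __init__(self, w, b): self.w, self.b = w, b   # private model weights
    def linear_layer(self, ct): return ct.matmul(self.w).add(self.b)

client, server = Client(), Server(torch.randn(16, 16), torch.randn(16))
ct = client.encrypt(torch.randn(1, 16))   # prompt embedding stays "encrypted"
out = client.decrypt(server.linear_layer(ct))
```

In real privacy-computing protocols the linear algebra is the comparatively cheap part; nonlinearities such as softmax and GELU dominate the cost, which is why privacy-computing-friendly transformer designs concentrate on them.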

A Hard Label Black-box Adversarial Attack Against Graph Neural Networks

no code implementations • 21 Aug 2021 • Jiaming Mu, Binghui Wang, Qi Li, Kun Sun, Mingwei Xu, Zhuotao Liu

We also evaluate the effectiveness of our attack under two defenses: one deploys a well-designed adversarial graph detector, and the other equips the target GNN model itself with a defense that prevents adversarial graph generation.

Adversarial Attack · Graph Classification +2
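As a toy illustration of the hard-label black-box setting (the attacker sees only the predicted class, never logits or gradients), here is a generic random edge-flip search; it is not the paper's attack, and `model`, the adjacency encoding, and the query budget are all assumptions.

```python
# Generic hard-label loop, not the paper's optimization: flip one edge at a
# time, query only the predicted label, and keep the first flip that changes it.
import itertools, random

def hard_label_attack(adj, model, true_label, max_queries=500):
    """adj: symmetric 0/1 adjacency matrix as a list of lists (toy encoding)."""
    pairs = list(itertools.combinations(range(len(adj)), 2))
    random.shuffle(pairs)
    for i, j in pairs[:max_queries]:
        adj[i][j] ^= 1; adj[j][i] ^= 1     # perturb the graph structure
        if model(adj) != true_label:       # one hard-label query
            return adj, (i, j)             # misclassified: attack succeeded
        adj[i][j] ^= 1; adj[j][i] ^= 1     # revert and try the next edge
    return None, None                      # failed within the query budget

# Toy usage with a stand-in "model" that labels a graph by edge-count parity.
g = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
toy_model = lambda a: (sum(map(sum, a)) // 2) % 2
print(hard_label_attack(g, toy_model, toy_model(g)))
```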
