Search Results for author: Yijue Wang

Found 6 papers, 2 papers with code

Sparse Progressive Distillation: Resolving Overfitting under Pretrain-and-Finetune Paradigm

no code implementations • ACL 2022 • Shaoyi Huang, Dongkuan Xu, Ian E. H. Yen, Yijue Wang, Sung-En Chang, Bingbing Li, Shiyang Chen, Mimi Xie, Sanguthevar Rajasekaran, Hang Liu, Caiwen Ding

Conventional wisdom in pruning Transformer-based language models is that pruning reduces the model's expressiveness and thus makes it more likely to underfit than to overfit.
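The entry pairs pruning with knowledge distillation; purely as a point of reference, here is a minimal PyTorch-style sketch of that general recipe (a distillation loss plus one-shot magnitude pruning). The temperature, mixing weight, and sparsity level are illustrative assumptions, not the paper's sparse progressive schedule.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
        """Hard-label cross-entropy mixed with soft-label KL to a teacher."""
        hard = F.cross_entropy(student_logits, labels)
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)
        return alpha * hard + (1 - alpha) * soft

    def magnitude_prune_(model, sparsity=0.5):
        """Zero out the smallest-magnitude weights of every Linear layer in place."""
        for module in model.modules():
            if isinstance(module, torch.nn.Linear):
                w = module.weight.data
                k = int(w.numel() * sparsity)
                if k > 0:
                    threshold = w.abs().flatten().kthvalue(k).values
                    w.mul_((w.abs() > threshold).float())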

Knowledge Distillation

TAG: Gradient Attack on Transformer-based Language Models

1 code implementation • Findings (EMNLP) 2021 • Jieren Deng, Yijue Wang, Ji Li, Chao Shang, Cao Qin, Hang Liu, Sanguthevar Rajasekaran, Caiwen Ding

In this paper, as a first attempt, we formulate the gradient attack problem on Transformer-based language models and propose a gradient attack algorithm, TAG, to reconstruct the local training data.
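TAG's exact objective is not reproduced here, but the general shape of such attacks is gradient matching: optimize dummy inputs and labels until the gradients they induce agree with the gradients shared by a victim client. Below is a hedged sketch under assumed names (`victim_grads`, a model that accepts continuous embeddings directly); the distance measure and optimizer are generic choices, not the paper's.

    import torch

    def gradient_matching_attack(model, loss_fn, victim_grads, input_shape,
                                 num_classes, steps=300, lr=0.1):
        """Approximate training data from shared gradients by optimizing dummy
        inputs/labels so that their gradients match the observed ones."""
        dummy_x = torch.randn(input_shape, requires_grad=True)   # e.g. token embeddings
        dummy_y = torch.randn(input_shape[0], num_classes, requires_grad=True)  # soft labels
        opt = torch.optim.Adam([dummy_x, dummy_y], lr=lr)

        for _ in range(steps):
            opt.zero_grad()
            logits = model(dummy_x)                  # assumes model takes embeddings
            loss = loss_fn(logits, dummy_y.softmax(dim=-1))
            grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
            # L2 distance between dummy gradients and the victim's reported gradients
            match = sum(((g - v) ** 2).sum() for g, v in zip(grads, victim_grads))
            match.backward()
            opt.step()
        return dummy_x.detach(), dummy_y.softmax(dim=-1).detach()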

Federated Learning • Cryptography and Security

SAPAG: A Self-Adaptive Privacy Attack From Gradients

no code implementations • 14 Sep 2020 • Yijue Wang, Jieren Deng, Dan Guo, Chenghong Wang, Xianrui Meng, Hang Liu, Caiwen Ding, Sanguthevar Rajasekaran

Distributed learning approaches such as federated learning and collaborative learning enable model training on decentralized user data while collecting only local gradients, so that data is processed close to its source for privacy.
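To make that setting concrete, here is a toy FedSGD round (the helper name `fedsgd_round` is assumed): each client sends only the gradient of its local batch and the server averages them. That per-client gradient is exactly the signal gradient attacks such as SAPAG try to invert; this is a generic sketch of gradient sharing, not the paper's attack.

    import torch

    def fedsgd_round(global_model, client_batches, loss_fn, lr=0.01):
        """One FedSGD round: each client computes a gradient on its local batch,
        the server averages the gradients and updates the global model.
        Only gradients leave a device; raw data stays local."""
        client_grads = []
        for x, y in client_batches:            # each tuple is one client's local batch
            global_model.zero_grad()
            loss = loss_fn(global_model(x), y)
            grads = torch.autograd.grad(loss, global_model.parameters())
            client_grads.append([g.detach() for g in grads])

        with torch.no_grad():
            for i, p in enumerate(global_model.parameters()):
                avg = torch.stack([g[i] for g in client_grads]).mean(dim=0)
                p -= lr * avg
        return global_model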

Federated Learning • Reconstruction Attack

Against Membership Inference Attack: Pruning is All You Need

no code implementations • 28 Aug 2020 • Yijue Wang, Chenghong Wang, Zigeng Wang, Shanglin Zhou, Hang Liu, Jinbo Bi, Caiwen Ding, Sanguthevar Rajasekaran

Large model sizes, heavy computational cost, and vulnerability to membership inference attacks (MIA) have impeded the adoption of deep learning and deep neural networks (DNNs), especially on mobile devices.
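For context on what a pruning-based defense is up against, below is a baseline confidence-threshold membership inference attack: an overfit model tends to be more confident on its training examples, which is the gap such defenses aim to shrink. The threshold and helper name are illustrative assumptions, not the paper's evaluation setup.

    import torch

    @torch.no_grad()
    def confidence_threshold_mia(model, samples, threshold=0.9):
        """Baseline membership inference: flag a sample as a training-set member
        when the model's top softmax confidence exceeds a threshold."""
        guesses = []
        for x, _ in samples:
            probs = torch.softmax(model(x.unsqueeze(0)), dim=-1)
            guesses.append(bool(probs.max() > threshold))
        return guesses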

Fraud Detection • Inference Attack +2
