Search Results for author: Sanguthevar Rajasekaran

Found 8 papers, 3 papers with code

Sparse Progressive Distillation: Resolving Overfitting under Pretrain-and-Finetune Paradigm

no code implementations ACL 2022 Shaoyi Huang, Dongkuan Xu, Ian E. H. Yen, Yijue Wang, Sung-En Chang, Bingbing Li, Shiyang Chen, Mimi Xie, Sanguthevar Rajasekaran, Hang Liu, Caiwen Ding

Conventional wisdom in pruning Transformer-based language models holds that pruning reduces model expressiveness and therefore makes the model more likely to underfit than to overfit.

Knowledge Distillation

TAG: Gradient Attack on Transformer-based Language Models

1 code implementation Findings (EMNLP) 2021 Jieren Deng, Yijue Wang, Ji Li, Chao Shang, Cao Qin, Hang Liu, Sanguthevar Rajasekaran, Caiwen Ding

In this paper, as a first attempt, we formulate the gradient attack problem on Transformer-based language models and propose a gradient attack algorithm, TAG, to reconstruct the local training data (a simplified gradient-matching sketch follows this entry).

Federated Learning · Cryptography and Security
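
Below is a minimal gradient-matching reconstruction sketch in PyTorch, intended only to illustrate the kind of attack TAG formalizes; it is not the authors' released implementation, and the linear model, data shapes, and LBFGS settings are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

# Toy gradient-matching reconstruction: the "victim" computes gradients on one
# private example; the attacker optimizes dummy data so that its gradients
# match the observed ones. Model, shapes, and optimizer are illustrative.
torch.manual_seed(0)
hidden, vocab = 16, 100
model = torch.nn.Linear(hidden, vocab)

# Victim side: gradients of the loss on a private (embedded) token.
x_true = torch.randn(1, hidden)
y_true = torch.tensor([3])
true_grads = torch.autograd.grad(
    F.cross_entropy(model(x_true), y_true), model.parameters()
)

# Attacker side: only `true_grads` (and the model) are visible.
x_dummy = torch.randn(1, hidden, requires_grad=True)
y_dummy = torch.randn(1, vocab, requires_grad=True)  # soft-label logits
opt = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    opt.zero_grad()
    log_probs = F.log_softmax(model(x_dummy), dim=-1)
    loss = -(y_dummy.softmax(dim=-1) * log_probs).sum()
    dummy_grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    # Gradient-matching objective: squared distance to the observed gradients.
    match = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    match.backward()
    return match

for _ in range(20):
    opt.step(closure)

print("reconstruction error:", (x_dummy - x_true).norm().item())
```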

SAPAG: A Self-Adaptive Privacy Attack From Gradients

no code implementations 14 Sep 2020 Yijue Wang, Jieren Deng, Dan Guo, Chenghong Wang, Xianrui Meng, Hang Liu, Caiwen Ding, Sanguthevar Rajasekaran

Distributed learning, such as federated learning or collaborative learning, enables model training on decentralized user data while collecting only local gradients, so data is processed close to its source for privacy (a minimal sketch of this gradient-sharing setup follows this entry).

Federated Learning · Reconstruction Attack
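
For context on the setting described in this entry, here is a minimal sketch of gradient sharing in federated learning, where clients send only local gradients and the server averages them; the toy model, data, and learning rate are assumptions, not details from the paper.

```python
import torch
import torch.nn.functional as F

# Toy federated round: clients compute gradients locally on private data and
# share only those gradients; the server averages them into a global update.
torch.manual_seed(0)
global_model = torch.nn.Linear(10, 2)

def client_gradients(model, x, y):
    """Compute local gradients without ever sending (x, y) to the server."""
    loss = F.cross_entropy(model(x), y)
    return torch.autograd.grad(loss, model.parameters())

clients = [(torch.randn(8, 10), torch.randint(0, 2, (8,))) for _ in range(4)]
collected = [client_gradients(global_model, x, y) for x, y in clients]

# Server-side aggregation (plain averaging) and a gradient-descent step.
lr = 0.1
with torch.no_grad():
    for p, *grads in zip(global_model.parameters(), *collected):
        p -= lr * torch.stack(grads).mean(dim=0)
```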

Against Membership Inference Attack: Pruning is All You Need

no code implementations 28 Aug 2020 Yijue Wang, Chenghong Wang, Zigeng Wang, Shanglin Zhou, Hang Liu, Jinbo Bi, Caiwen Ding, Sanguthevar Rajasekaran

Large model sizes, heavy computational cost, and vulnerability to membership inference attacks (MIA) have impeded the adoption of deep learning and deep neural networks (DNNs), especially on mobile devices (a toy membership-inference sketch follows this entry).

Fraud Detection · Inference Attack +2
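
The following is a toy loss-threshold membership inference attack, included only to illustrate the threat model this paper defends against; it is not the authors' pruning defense, and the model, data, and threshold are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

# Toy loss-threshold membership inference attack (MIA): samples whose loss
# under the target model falls below a threshold are guessed to be training
# members. Model, data, and threshold are illustrative assumptions.
torch.manual_seed(0)
model = torch.nn.Linear(20, 2)

members = (torch.randn(64, 20), torch.randint(0, 2, (64,)))
non_members = (torch.randn(64, 20), torch.randint(0, 2, (64,)))
# (In a real attack the model would have been trained on `members`; here it is
# untrained, so the attack should perform near chance.)

def per_sample_loss(x, y):
    with torch.no_grad():
        return F.cross_entropy(model(x), y, reduction="none")

threshold = 0.5  # assumed calibration, e.g. estimated from shadow models

def infer_membership(x, y):
    """Guess 'member' when the target model's loss on (x, y) is low."""
    return per_sample_loss(x, y) < threshold

tp = infer_membership(*members).float().mean()
fp = infer_membership(*non_members).float().mean()
print(f"member guess rate: {tp:.2f}, non-member guess rate: {fp:.2f}")
```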

AutoPrune: Automatic Network Pruning by Regularizing Auxiliary Parameters

1 code implementation NeurIPS 2019 Xia Xiao, Zigeng Wang, Sanguthevar Rajasekaran

Reducing model redundancy is an important task for deploying complex deep learning models on resource-limited or time-sensitive devices (an illustrative sketch of gate-based pruning follows this entry).

Network Pruning
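
As a rough illustration of the idea suggested by the title (pruning driven by regularized auxiliary parameters), the sketch below attaches a learnable gate to every weight and penalizes the gates toward sparsity; it is not the paper's AutoPrune algorithm, and all names and hyperparameters are assumptions.

```python
import torch
import torch.nn.functional as F

# Illustrative pruning with auxiliary gate parameters: each weight of a linear
# layer gets a sigmoid gate; an L1-style penalty on the gates pushes them
# toward 0, and weights whose gates collapse are pruned afterwards.
torch.manual_seed(0)
layer = torch.nn.Linear(32, 10)
gates = torch.zeros_like(layer.weight, requires_grad=True)  # auxiliary params

x = torch.randn(128, 32)
y = torch.randint(0, 10, (128,))
opt = torch.optim.Adam([layer.weight, layer.bias, gates], lr=1e-2)

for _ in range(200):
    opt.zero_grad()
    masked_weight = layer.weight * torch.sigmoid(gates)
    logits = F.linear(x, masked_weight, layer.bias)
    # Task loss plus a sparsity penalty on the gate activations.
    loss = F.cross_entropy(logits, y) + 1e-3 * torch.sigmoid(gates).sum()
    loss.backward()
    opt.step()

# Hard-prune weights whose gates ended up near zero.
mask = (torch.sigmoid(gates) > 0.5).float()
print("kept weights:", int(mask.sum()), "of", mask.numel())
```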

Novel and Effective Parallel Mix-Generator Generative Adversarial Networks

no code implementations ICLR 2018 Xia Xiao, Sanguthevar Rajasekaran

In our model, we propose an adjustment component that collects the data points produced by all generators, learns the boundary between each pair of generators, and provides an error signal that separates the supports of the generated distributions (a simplified sketch follows this entry).
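
The sketch below illustrates, in simplified form, an adjustment component that classifies which generator produced each sample and feeds that error back so the generators' supports separate; it is a toy version built on assumed architectures and losses, not the paper's model, and the GAN discriminator is omitted.

```python
import torch
import torch.nn.functional as F

# Simplified mix-generator setup: K generators map noise to data space, and an
# "adjustment" classifier predicts which generator produced each sample. Each
# generator is trained to be easy to identify, which pushes the generated
# supports apart. (The adversarial discriminator is omitted for brevity.)
torch.manual_seed(0)
K, noise_dim, data_dim = 3, 8, 2
generators = [torch.nn.Linear(noise_dim, data_dim) for _ in range(K)]
classifier = torch.nn.Linear(data_dim, K)   # the adjustment component

opt_g = torch.optim.Adam([p for g in generators for p in g.parameters()], lr=1e-3)
opt_c = torch.optim.Adam(classifier.parameters(), lr=1e-3)

for _ in range(100):
    z = torch.randn(K, 16, noise_dim)
    samples = [generators[k](z[k]) for k in range(K)]
    labels = torch.arange(K).repeat_interleave(16)

    # Train the classifier to tell the generators apart.
    opt_c.zero_grad()
    c_loss = F.cross_entropy(classifier(torch.cat(samples).detach()), labels)
    c_loss.backward()
    opt_c.step()

    # Error signal to the generators: each generator's samples should be
    # attributable to it, i.e. not fall inside another generator's support.
    opt_g.zero_grad()
    g_loss = F.cross_entropy(classifier(torch.cat(samples)), labels)
    g_loss.backward()
    opt_g.step()
```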
