Search Results for author: Yuwen Pu

Found 9 papers, 2 papers with code

CLIBE: Detecting Dynamic Backdoors in Transformer-based NLP Models

1 code implementation • 2 Sep 2024 • Rui Zeng, Xi Chen, Yuwen Pu, Xuhong Zhang, Tianyu Du, Shouling Ji

CLIBE injects a "few-shot perturbation" into the suspect Transformer model by crafting optimized weight perturbations in the attention layers to make the perturbed model classify a limited number of reference samples as a target label (a minimal sketch of this idea follows this entry).

Text Classification • Text Generation
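The "few-shot perturbation" test can be pictured roughly as follows. This is a minimal sketch, not the paper's implementation: it assumes a Hugging Face style sequence-classification model whose attention-layer parameter names are listed in `attn_param_names`, a batch of tokenized reference samples `ref_inputs`, and PyTorch 2.x for `torch.func.functional_call`; all of these names are illustrative.

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call

def few_shot_perturbation(model, ref_inputs, target_label, attn_param_names,
                          steps=50, lr=1e-3, reg=1.0):
    """Optimize a small additive perturbation on attention-layer weights so
    the perturbed model labels the reference samples as target_label."""
    base = {n: p.detach() for n, p in model.named_parameters()}
    # Only attention-layer parameters are allowed to be perturbed.
    deltas = {n: torch.zeros_like(base[n], requires_grad=True)
              for n in attn_param_names}
    opt = torch.optim.Adam(deltas.values(), lr=lr)
    target = torch.full((ref_inputs["input_ids"].size(0),), target_label,
                        dtype=torch.long)

    for _ in range(steps):
        opt.zero_grad()
        # Run the model with perturbed attention weights; the original
        # parameters are never modified in place.
        perturbed = {n: base[n] + d for n, d in deltas.items()}
        logits = functional_call(model, perturbed, args=(),
                                 kwargs=dict(ref_inputs)).logits
        loss = F.cross_entropy(logits, target)
        loss = loss + reg * sum(d.norm() for d in deltas.values())  # keep the perturbation small
        loss.backward()
        opt.step()

    norm = sum(d.detach().norm().item() for d in deltas.values())
    return deltas, loss.item(), norm
```

Intuitively, how easily the reference samples can be flipped to the target label (the final loss and the size of the perturbation) is the kind of signal a detector could threshold when looking for a dynamic backdoor.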

SUB-PLAY: Adversarial Policies against Partially Observed Multi-Agent Reinforcement Learning Systems

1 code implementation • 6 Feb 2024 • Oubo Ma, Yuwen Pu, Linkang Du, Yang Dai, Ruo Wang, Xiaolei Liu, Yingcai Wu, Shouling Ji

Furthermore, we evaluate three potential defenses aimed at exploring ways to mitigate security threats posed by adversarial policies, providing constructive recommendations for deploying MARL in competitive environments.

Multi-agent Reinforcement Learning

The Risk of Federated Learning to Skew Fine-Tuning Features and Underperform Out-of-Distribution Robustness

no code implementations • 25 Jan 2024 • Mengyao Du, Miao Zhang, Yuwen Pu, Kai Xu, Shouling Ji, Quanjun Yin

To tackle the scarcity and privacy issues associated with domain-specific datasets, combining federated learning with fine-tuning has emerged as a practical solution.

Diversity • Federated Learning • +1

MEAOD: Model Extraction Attack against Object Detectors

no code implementations • 22 Dec 2023 • Zeyu Li, Chenghui Shi, Yuwen Pu, Xuhong Zhang, Yu Li, Jinbao Li, Shouling Ji

The widespread use of deep learning technology across various industries has made deep neural network models highly valuable and, as a result, attractive targets for potential attackers.

Active Learning • Model extraction • +3

Improving the Robustness of Transformer-based Large Language Models with Dynamic Attention

no code implementations • 29 Nov 2023 • Lujia Shen, Yuwen Pu, Shouling Ji, Changjiang Li, Xuhong Zhang, Chunpeng Ge, Ting Wang

Extensive experiments demonstrate that dynamic attention significantly mitigates the impact of adversarial attacks, achieving up to 33% better performance than previous methods against widely used adversarial attacks.

TextDefense: Adversarial Text Detection based on Word Importance Entropy

no code implementations • 12 Feb 2023 • Lujia Shen, Xuhong Zhang, Shouling Ji, Yuwen Pu, Chunpeng Ge, Xing Yang, Yanghe Feng

TextDefense differs from previous approaches in that it utilizes the target model for detection and is thus attack-type agnostic (a toy sketch of the word-importance-entropy score follows this entry).

Adversarial Text • Text Detection
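The title's "word importance entropy" can be illustrated with a small sketch. This is an assumption-laden toy, not the paper's exact scoring: `model` and `tokenizer` stand for a generic Hugging Face classifier pair, importance is taken as the drop in the predicted-class probability when a word is masked, and the decision threshold on the resulting entropy would have to be calibrated on clean data.

```python
import math
import torch
import torch.nn.functional as F

def word_importance_entropy(text, tokenizer, model, mask_token="[MASK]"):
    """Entropy of the word-importance distribution for one input text.
    Importance of word i is the drop in the predicted-class probability
    when that word is replaced by mask_token, queried on the target model
    itself, so the score needs no knowledge of the attack type."""
    words = text.split()
    enc = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = F.softmax(model(**enc).logits, dim=-1)[0]
    pred = probs.argmax().item()
    base = probs[pred].item()

    importances = []
    for i in range(len(words)):
        masked = " ".join(words[:i] + [mask_token] + words[i + 1:])
        enc_i = tokenizer(masked, return_tensors="pt", truncation=True)
        with torch.no_grad():
            p_i = F.softmax(model(**enc_i).logits, dim=-1)[0, pred].item()
        importances.append(max(base - p_i, 1e-12))  # clamp so the distribution stays valid

    total = sum(importances)
    dist = [w / total for w in importances]
    return -sum(p * math.log(p) for p in dist)  # compare against a clean-data threshold
```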

Hijack Vertical Federated Learning Models As One Party

no code implementations • 1 Dec 2022 • Pengyu Qiu, Xuhong Zhang, Shouling Ji, Changjiang Li, Yuwen Pu, Xing Yang, Ting Wang

Vertical federated learning (VFL) is an emerging paradigm that enables collaborators to build machine learning models together in a distributed fashion (a toy sketch of the split-model setup follows this entry).

Vertical Federated Learning
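For readers unfamiliar with the setting, the split-model structure that vertical federated learning relies on looks roughly like this. It is a single-process toy with made-up dimensions: two parties hold disjoint feature slices of the same samples, each runs a bottom model locally, and only embeddings (and their gradients) cross the party boundary, while one party holds the labels and the top model.

```python
import torch
import torch.nn as nn

class BottomModel(nn.Module):
    """Local feature extractor owned by one party."""
    def __init__(self, in_dim, emb_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                 nn.Linear(32, emb_dim))
    def forward(self, x):
        return self.net(x)

class TopModel(nn.Module):
    """Held by the active party together with the labels."""
    def __init__(self, emb_dim=16, n_classes=2):
        super().__init__()
        self.head = nn.Linear(2 * emb_dim, n_classes)
    def forward(self, emb_a, emb_b):
        return self.head(torch.cat([emb_a, emb_b], dim=-1))

# Party A holds 10 features per sample, party B holds 6 for the same samples.
party_a, party_b, top = BottomModel(10), BottomModel(6), TopModel()
opt = torch.optim.SGD([*party_a.parameters(), *party_b.parameters(),
                       *top.parameters()], lr=0.1)
x_a, x_b = torch.randn(32, 10), torch.randn(32, 6)
y = torch.randint(0, 2, (32,))

logits = top(party_a(x_a), party_b(x_b))      # only embeddings cross the party boundary
loss = nn.functional.cross_entropy(logits, y)
opt.zero_grad(); loss.backward(); opt.step()  # gradients flow back through the embeddings
```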
