Search Results for author: Yiyong Liu

Found 3 papers, 2 with code

Transferable Availability Poisoning Attacks

1 code implementation · 8 Oct 2023 · Yiyong Liu, Michael Backes, Xiao Zhang

We consider availability data poisoning attacks, where an adversary aims to degrade the overall test accuracy of a machine learning model by crafting small perturbations to its training data.

Tasks: Contrastive Learning, Data Poisoning, +1
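The abstract above describes crafting small training-set perturbations that degrade a victim model's test accuracy. As a toy illustration only (not the paper's transferable attack), here is a minimal error-minimizing perturbation loop against logistic regression; the data, budget `eps`, and step sizes are all assumed for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.5, steps=300):
    # Plain gradient descent on the logistic loss.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

# Two-Gaussian binary classification data (synthetic).
X = np.vstack([rng.normal(-1.0, 1.0, (100, 2)),
               rng.normal(1.0, 1.0, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

# Craft small perturbations that drive the training loss toward zero under a
# reference model, so a victim trained on X + delta can fit shortcuts instead
# of real features ("unlearnable examples" style, illustrative only).
eps = 0.5                          # infinity-norm perturbation budget (assumed)
w_ref = train_logreg(X, y)
delta = np.zeros_like(X)
for _ in range(100):
    p = sigmoid((X + delta) @ w_ref)
    grad = (p - y)[:, None] * w_ref[None, :]   # d(loss)/d(input)
    delta = np.clip(delta - 0.1 * grad, -eps, eps)
```

The clip keeps the poison imperceptible in the sense of a bounded per-feature change, matching the "small perturbations" constraint in the abstract.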

Membership Inference Attacks by Exploiting Loss Trajectory

1 code implementation · 31 Aug 2022 · Yiyong Liu, Zhengyu Zhao, Michael Backes, Yang Zhang

Machine learning models are vulnerable to membership inference attacks in which an adversary aims to predict whether or not a particular sample was contained in the target model's training dataset.

Tasks: Knowledge Distillation
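The paper's attack exploits the whole loss trajectory across training epochs (obtained via knowledge distillation); for context, the classic single-snapshot baseline simply thresholds the per-sample loss. A minimal sketch with synthetic loss values standing in for a real target model, and an assumed threshold `tau`:

```python
import numpy as np

rng = np.random.default_rng(0)

# Overfit models assign much lower loss to training members; these loss
# values are synthetic stand-ins, not outputs of a real model.
member_losses = rng.normal(0.1, 0.05, 500)
nonmember_losses = rng.normal(1.0, 0.2, 500)

tau = 0.5                                        # attack threshold (assumed)
losses = np.concatenate([member_losses, nonmember_losses])
is_member = np.concatenate([np.ones(500), np.zeros(500)])
pred = (losses < tau).astype(float)              # low loss => predict "member"

attack_acc = float((pred == is_member).mean())
```

A trajectory-based attack replaces the single `losses` value per sample with a vector of losses recorded over training, which separates members from non-members more reliably than any one snapshot.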

Auditing Membership Leakages of Multi-Exit Networks

no code implementations · 23 Aug 2022 · Zheng Li, Yiyong Liu, Xinlei He, Ning Yu, Michael Backes, Yang Zhang

Furthermore, we propose a hybrid attack that exploits the exit information to improve the performance of existing attacks.
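The abstract mentions combining exit information with existing attack signals. As an illustrative sketch only (the `(confidence, exit index)` pairs are synthetic stand-ins and the decision rule is assumed, not the paper's attack), a hybrid membership rule could look like this:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# Members: an overfit multi-exit model is confident early, so member samples
# tend to leave at shallow exits with high confidence (synthetic values).
mem_conf = rng.normal(0.90, 0.05, n)
mem_exit = rng.integers(0, 2, n)        # exits 0-1
# Non-members: lower confidence, deeper exits (synthetic values).
non_conf = rng.normal(0.60, 0.10, n)
non_exit = rng.integers(1, 4, n)        # exits 1-3

def hybrid_attack(conf, exit_idx, tau=0.75, max_exit=1):
    # Predict "member" only when both signals agree: high confidence
    # AND an early exit. Thresholds are illustrative assumptions.
    return (conf > tau) & (exit_idx <= max_exit)

tpr = float(hybrid_attack(mem_conf, mem_exit).mean())   # true-positive rate
fpr = float(hybrid_attack(non_conf, non_exit).mean())   # false-positive rate
attack_acc = 0.5 * (tpr + (1.0 - fpr))
```

The point of the sketch is that the exit index carries membership signal of its own, so conjoining it with a confidence test can filter out confident non-members that a confidence-only attack would misclassify.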
