1 code implementation • 8 Oct 2023 • Yiyong Liu, Michael Backes, Xiao Zhang
We consider availability data poisoning attacks, where an adversary aims to degrade the overall test accuracy of a machine learning model by crafting small perturbations to its training data.
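The idea can be illustrated with a minimal sketch in the style of error-minimizing "unlearnable examples" perturbations, not necessarily this paper's method: a bounded per-sample perturbation is optimized to *minimize* the training loss, so the poisoned points carry almost no learning signal. The model, `eps`, `alpha`, and `steps` below are placeholders.

```python
import torch
import torch.nn as nn

def craft_availability_perturbation(model, x, y, eps=8/255, alpha=2/255, steps=20):
    """Craft a small, bounded perturbation that minimizes the training loss,
    so poisoned samples look 'already learned' and degrade generalization.
    Error-minimizing / unlearnable-examples style; illustrative sketch only."""
    loss_fn = nn.CrossEntropyLoss()
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta -= alpha * grad.sign()               # descend: drive loss toward 0
            delta.clamp_(-eps, eps)                    # keep perturbation small
            delta.copy_((x + delta).clamp(0, 1) - x)   # keep pixels in valid range
    return (x + delta).detach()

# Toy usage with a placeholder model and random "images"
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(16, 3, 32, 32), torch.randint(0, 10, (16,))
x_poisoned = craft_availability_perturbation(model, x, y)
```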
1 code implementation • 31 Aug 2022 • Yiyong Liu, Zhengyu Zhao, Michael Backes, Yang Zhang
Machine learning models are vulnerable to membership inference attacks, in which an adversary aims to predict whether a particular sample was contained in the target model's training dataset.
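For intuition, here is the classic loss-threshold baseline (not this paper's attack): since models typically fit training points more tightly than unseen ones, a sample with unusually low loss is predicted to be a member. The `threshold` is an assumed input.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def loss_threshold_mia(model, x, y, threshold):
    """Predict membership from per-sample loss: low loss hints that the
    model has seen (and fit) the sample during training.
    Classic loss-threshold baseline, not this paper's attack."""
    losses = F.cross_entropy(model(x), y, reduction="none")
    return losses < threshold  # True => predicted "member"
```

In practice the threshold is commonly calibrated on a shadow model, e.g. as the shadow model's average training loss.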
no code implementations • 23 Aug 2022 • Zheng Li, Yiyong Liu, Xinlei He, Ning Yu, Michael Backes, Yang Zhang
Furthermore, we propose a hybrid attack that exploits the exit information of multi-exit networks to improve the performance of existing membership inference attacks.
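A hedged sketch of the underlying idea, with a hypothetical interface (both `TinyTwoExitNet` and the feature layout are illustrations, not the paper's construction): easy, often-memorized samples tend to leave a multi-exit network at earlier exits, so the index of the exit that fired can be appended to the usual attack features.

```python
import torch
import torch.nn as nn

class TinyTwoExitNet(nn.Module):
    """Toy stand-in for a multi-exit network: takes the first exit whenever
    its confidence clears a threshold tau (hypothetical interface)."""
    def __init__(self, dim=32, classes=10, tau=0.9):
        super().__init__()
        self.block1, self.exit1 = nn.Linear(dim, dim), nn.Linear(dim, classes)
        self.block2, self.exit2 = nn.Linear(dim, dim), nn.Linear(dim, classes)
        self.tau = tau

    def forward(self, x):
        h = torch.relu(self.block1(x))
        logits1 = self.exit1(h)
        logits2 = self.exit2(torch.relu(self.block2(h)))
        early = logits1.softmax(-1).max(-1).values >= self.tau
        logits = torch.where(early.unsqueeze(-1), logits1, logits2)
        exit_idx = (~early).long()  # 0 = first exit, 1 = second exit
        return logits, exit_idx

@torch.no_grad()
def attack_features(model, x, y):
    """Combine posterior-based signals with the exit index, which itself
    leaks membership signal in multi-exit models."""
    logits, exit_idx = model(x)
    loss = nn.functional.cross_entropy(logits, y, reduction="none")
    conf = logits.softmax(-1).max(-1).values
    return torch.stack([loss, conf, exit_idx.float()], dim=1)

# Feature vectors [loss, top-1 confidence, exit index] for an attack classifier
feats = attack_features(TinyTwoExitNet(), torch.randn(8, 32), torch.randint(0, 10, (8,)))
```

These per-sample features would then feed a binary attack classifier that predicts member vs. non-member.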