no code implementations • 10 Nov 2023 • Fereshteh Razmi, Jian Lou, Li Xiong
We also explore the role of different components of DP algorithms in defending against backdoor attacks, and show that PATE is effective against these attacks due to the bagging structure of the teacher models it employs.
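The bagging intuition can be illustrated with a minimal sketch of PATE-style noisy-max aggregation (this is not the paper's implementation; the vote counts, epsilon value, and class counts below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_max_aggregate(teacher_votes, num_classes, epsilon):
    """Aggregate per-teacher predicted labels with Laplace noisy-max,
    as in PATE's aggregation step (illustrative parameterization)."""
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    counts += rng.laplace(scale=2.0 / epsilon, size=num_classes)
    return int(np.argmax(counts))

# Each teacher is trained on a disjoint shard of the data (a bagging-like
# partition), so a poisoned shard controls at most one vote, which the
# noisy majority typically absorbs.
votes = np.array([2, 2, 2, 2, 7])  # one hypothetical poisoned teacher votes 7
label = noisy_max_aggregate(votes, num_classes=10, epsilon=5.0)
```

With a clear majority among teachers, the single corrupted vote rarely changes the aggregated label, which is the mechanism the abstract credits for PATE's robustness.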
1 code implementation • 9 Aug 2021 • Fereshteh Razmi, Li Xiong
Poisoning attacks are a category of adversarial machine learning threats in which an adversary attempts to subvert the outcome of a machine learning system by injecting crafted data into the training data set, thus increasing the machine learning model's test error.