Search Results for author: Fereshteh Razmi

Found 2 papers, 1 paper with code

Does Differential Privacy Prevent Backdoor Attacks in Practice?

no code implementations · 10 Nov 2023 · Fereshteh Razmi, Jian Lou, Li Xiong

We also explore the role of different components of DP algorithms in defending against backdoor attacks and show that PATE is effective against these attacks due to the bagging structure of its teacher models.
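The bagging structure the abstract credits can be sketched as follows. This is a minimal illustration, not the authors' code: each PATE teacher is trained on a disjoint data shard, so a backdoored shard corrupts only one teacher, whose vote is drowned out during noisy aggregation. The `noisy_argmax` helper and all numbers are illustrative assumptions.

```python
import random

def noisy_argmax(votes, epsilon, rng):
    # PATE-style aggregation: add Laplace noise (difference of two
    # exponentials) to each class's vote count, return the noisy winner.
    noisy = [v + rng.expovariate(epsilon) - rng.expovariate(epsilon)
             for v in votes]
    return max(range(len(noisy)), key=lambda c: noisy[c])

rng = random.Random(0)

# 9 honest teachers predict class 0; 1 teacher trained on a poisoned
# shard predicts the attacker's target class 1.
teacher_preds = [0] * 9 + [1]
votes = [teacher_preds.count(c) for c in (0, 1)]  # [9, 1]

label = noisy_argmax(votes, epsilon=1.0, rng=rng)  # honest majority wins: 0
```

Because the poisoned teacher holds a single vote out of ten, the aggregated label stays clean even before the noise is considered; the noise provides the differential-privacy guarantee on top.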

Classification Auto-Encoder based Detector against Diverse Data Poisoning Attacks

1 code implementation · 9 Aug 2021 · Fereshteh Razmi, Li Xiong

Poisoning attacks are a category of adversarial machine learning threats in which an adversary attempts to subvert the outcome of a machine learning system by injecting crafted data into the training set, thereby increasing the model's test error.

Tasks: BIG-bench Machine Learning · Classification +1
