Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning

1 Apr 2018 · Matthew Jagielski, Alina Oprea, Battista Biggio, Chang Liu, Cristina Nita-Rotaru, Bo Li

As machine learning becomes widely used for automated decisions, attackers have strong incentives to manipulate the results and models generated by machine learning algorithms. In this paper, we perform the first systematic study of poisoning attacks and their countermeasures for linear regression models...
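To make the threat concrete, here is a minimal sketch (not the paper's optimization-based attack) of how a single adversarially chosen training point can shift an ordinary least-squares fit. All names and values below are illustrative assumptions.

```python
# Illustrative only: one high-leverage poisoned point pulling an OLS fit
# away from the true relationship y ≈ 2x.
import numpy as np

rng = np.random.default_rng(0)

# Clean training data drawn from y = 2x + small Gaussian noise.
X = rng.uniform(0.0, 1.0, size=(50, 1))
y = 2.0 * X[:, 0] + rng.normal(0.0, 0.05, size=50)

def ols_slope(X, y):
    """Slope of the least-squares line (with intercept) through the data."""
    A = np.hstack([X, np.ones((X.shape[0], 1))])  # append intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[0]

clean_slope = ols_slope(X, y)

# Poisoning: inject a single point far outside the clean distribution.
X_poisoned = np.vstack([X, [[5.0]]])
y_poisoned = np.append(y, -10.0)
poisoned_slope = ols_slope(X_poisoned, y_poisoned)

print(f"clean slope   : {clean_slope:.2f}")    # close to the true slope, 2.0
print(f"poisoned slope: {poisoned_slope:.2f}")  # dragged far from 2.0
```

Because least squares minimizes squared error, a single point with extreme leverage can dominate the fit; this sensitivity is what motivates the robust defenses studied in the paper.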

