Evaluation Methodology for Attacks Against Confidence Thresholding Models

ICLR 2019 · Ian Goodfellow, Yao Qin, David Berthelot

Current machine learning algorithms can be easily fooled by adversarial examples. One possible solution path is to make models that use confidence thresholding to avoid making mistakes: such a model refuses to make a prediction whenever its confidence falls below a chosen threshold. …
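As a minimal sketch of the confidence-thresholding idea described above (not the paper's evaluation methodology itself), the snippet below abstains whenever the maximum softmax probability falls below a threshold; the function name and the threshold value of 0.9 are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def predict_with_abstention(probs: np.ndarray, threshold: float = 0.9):
    """Return the predicted class index, or None (abstain) when the
    maximum predicted probability falls below `threshold`.

    probs: 1-D array of class probabilities for a single input.
    """
    top_class = int(np.argmax(probs))
    confidence = float(probs[top_class])
    if confidence < threshold:
        return None  # refuse to predict rather than risk a mistake
    return top_class

# Example: a confident prediction and an abstention.
print(predict_with_abstention(np.array([0.05, 0.92, 0.03])))  # -> 1
print(predict_with_abstention(np.array([0.40, 0.35, 0.25])))  # -> None
```

The threshold trades off coverage on clean inputs against the risk of answering on inputs the model should decline, which is the kind of tradeoff an attack evaluation for such models has to account for.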


Code


No code implementations yet.
