Detecting Adversarial Samples from Artifacts

1 Mar 2017 · Reuben Feinman, Ryan R. Curtin, Saurabh Shintre, Andrew B. Gardner

Deep neural networks (DNNs) are powerful nonlinear architectures that are known to be robust to random perturbations of the input. However, these models are vulnerable to adversarial perturbations: small input changes crafted explicitly to fool the model...
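As a minimal illustration of such a crafted perturbation (this is not the paper's detection method), the fast gradient sign method (FGSM) of Goodfellow et al. perturbs an input in the direction of the sign of the loss gradient. The sketch below applies it to a toy logistic-regression model; all weights and values are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy model: logistic regression with fixed, illustrative weights.
w = np.array([2.0, -3.0, 1.5])
b = 0.1

def predict(x):
    return sigmoid(w @ x + b)

# A clean input the model classifies as positive (probability > 0.5).
x = np.array([1.0, 0.2, 0.5])
p_clean = predict(x)

# For label y = 1 and cross-entropy loss, the gradient w.r.t. x is
# (p - y) * w. FGSM steps by epsilon in the sign of that gradient.
y = 1.0
grad = (predict(x) - y) * w
epsilon = 0.5  # exaggerated step size so the flip is visible in a toy model
x_adv = x + epsilon * np.sign(grad)
p_adv = predict(x_adv)
```

With these toy weights the clean input is confidently positive while the perturbed input crosses the decision boundary, showing how a small, targeted change can flip the model's prediction.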

