Defending Against Adversarial Attacks by Suppressing the Largest Eigenvalue of Fisher Information Matrix

13 Sep 2019 · Chaomin Shen, Yaxin Peng, Guixu Zhang, Jinsong Fan

We propose a scheme for defending against adversarial attacks by suppressing the largest eigenvalue of the Fisher information matrix (FIM). Our starting point is one explanation of the rationale behind adversarial examples...
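The abstract above is truncated and does not spell out the training scheme, but the central quantity it names, the largest eigenvalue of the input-space FIM of a softmax classifier, can be illustrated with a short sketch. The code below is an assumption-laden illustration, not the authors' method: it estimates that eigenvalue for a single input by forming per-class input gradients explicitly and running power iteration. The `model`, the single-sample input shape, and the explicit per-class gradient loop are all choices made here for clarity.

```python
import torch
import torch.nn.functional as F

def fim_top_eigenvalue(model, x, n_power_iter=10):
    """
    Rough estimate of the largest eigenvalue of the input-space Fisher
    information matrix
        G(x) = sum_y p(y|x) * g_y g_y^T,   g_y = d log p(y|x) / dx,
    via explicit per-class gradients and power iteration.

    Assumes `model` maps a single input x of shape (1, ...) to logits
    of shape (1, C); this is an illustrative sketch, not the paper's
    training procedure.
    """
    x = x.detach().requires_grad_(True)
    logp = F.log_softmax(model(x), dim=-1).squeeze(0)   # log p(y|x), shape (C,)
    p = logp.exp().detach()                             # p(y|x), shape (C,)

    # Gradient of each log-probability w.r.t. the input, flattened to (D,).
    grads = []
    for y in range(logp.shape[0]):
        g = torch.autograd.grad(logp[y], x, retain_graph=True)[0]
        grads.append(g.flatten())
    G_rows = torch.stack(grads)                         # (C, D)

    # Power iteration: v <- G v / ||G v||, with G = G_rows^T diag(p) G_rows.
    v = torch.randn(G_rows.shape[1], device=x.device)
    v = v / v.norm()
    eig = torch.tensor(0.0, device=x.device)
    for _ in range(n_power_iter):
        gv = G_rows.t() @ (p * (G_rows @ v))            # matrix-vector product G v
        eig = torch.dot(v, gv)                          # Rayleigh quotient (||v|| = 1)
        v = gv / (gv.norm() + 1e-12)
    return eig
```

In a defense along these lines, such an estimate could in principle serve as a regularization term added to the training loss, so that the classifier is pushed toward inputs where the FIM's top eigenvalue is small; the exact objective used in the paper is not reproduced here.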
