
The Adversarial Attack and Detection under the Fisher Information Metric

Many deep learning models are vulnerable to adversarial attacks, i.e., imperceptible but intentionally designed perturbations to the input can cause the networks to produce incorrect outputs. In this paper, using information geometry, we provide a reasonable explanation for the vulnerability of deep learning models...
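
The abstract is truncated here, but its framing (input perturbations analyzed under the Fisher Information Metric) can be illustrated with a minimal sketch. Assuming a small PyTorch classifier, the hypothetical helper below builds the Fisher information matrix of the model's predictive distribution with respect to the input and returns its leading eigenvector, i.e., the perturbation direction along which the output distribution is most sensitive. This is an illustration of the geometric idea, not necessarily the paper's exact algorithm; the function and variable names are invented for this example.

```python
import torch
import torch.nn.functional as F

def fisher_top_direction(model, x):
    """Leading eigenvector of the Fisher information matrix (FIM) of
    p(y|x), taken with respect to the input x:

        G = sum_y p(y|x) * grad_x log p(y|x) grad_x log p(y|x)^T

    Builds G densely, so this is only feasible for small input dims.
    """
    x = x.clone().requires_grad_(True)
    log_probs = F.log_softmax(model(x), dim=-1)   # shape (1, num_classes)
    probs = log_probs.exp().detach()[0]           # p(y|x)

    # Rows of J are grad_x log p(y|x), one row per class y.
    grads = []
    for c in range(log_probs.shape[-1]):
        g, = torch.autograd.grad(log_probs[0, c], x, retain_graph=True)
        grads.append(g.flatten())
    J = torch.stack(grads)                        # (num_classes, input_dim)

    G = J.t() @ (probs.unsqueeze(1) * J)          # dense FIM, (d, d)
    eigvals, eigvecs = torch.linalg.eigh(G)       # eigenvalues ascending
    return eigvecs[:, -1], eigvals[-1]

# Hypothetical usage: step along the most sensitive direction.
model = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.Tanh(),
                            torch.nn.Linear(16, 3))
x = torch.randn(1, 8)
v, lam = fisher_top_direction(model, x)
# The eigenvector's sign is arbitrary; an attacker would pick the sign
# that increases the loss. Step size 0.1 is an arbitrary choice here.
x_adv = x + 0.1 * v.reshape(x.shape)
```

For realistic image inputs, the dense FIM above would be far too large; one would instead estimate only the top eigenpair with an iterative method such as power iteration, which needs only FIM-vector products.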
