Hilbert-Based Generative Defense for Adversarial Examples

ICCV 2019 · Yang Bai, Yan Feng, Yisen Wang, Tao Dai, Shu-Tao Xia, Yong Jiang

Adversarial perturbations of clean images are usually imperceptible to human eyes, but can confidently fool deep neural networks (DNNs) into making incorrect predictions. Such vulnerability of DNNs raises serious security concerns about their practicability in security-sensitive applications...
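To make the vulnerability concrete, here is a minimal sketch of how an adversarial perturbation can flip a classifier's decision. It uses a toy linear (logistic) classifier with hand-picked weights and a fast-gradient-sign-style step; all names and values are illustrative assumptions, and this is not the paper's Hilbert-based defense or its evaluation setup.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical "trained" binary classifier: fixed weights chosen for illustration.
w = np.array([1.0, -1.0, 1.0, -1.0])
x = np.array([0.3, 0.2, 0.4, 0.3])   # clean input; score w.x = 0.2 -> class 1
y = 1                                 # true label

score = w @ x
pred_clean = int(sigmoid(score) > 0.5)

# Gradient of the logistic loss w.r.t. the input: (sigmoid(w.x) - y) * w.
grad = (sigmoid(score) - y) * w

# FGSM-style step: a small per-coordinate change bounded by eps.
eps = 0.1
x_adv = x + eps * np.sign(grad)

pred_adv = int(sigmoid(w @ x_adv) > 0.5)
print(pred_clean, pred_adv)  # the tiny perturbation flips the predicted class
```

Each coordinate moves by at most 0.1, yet the predicted class changes; real image attacks follow the same principle with the gradient taken through a deep network.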




