Universal Adversarial Perturbation via Prior Driven Uncertainty Approximation

ICCV 2019 · Hong Liu, Rongrong Ji, Jie Li, Baochang Zhang, Yue Gao, Yongjian Wu, Feiyue Huang

Deep learning models have shown their vulnerability to universal adversarial perturbations (UAPs), which are quasi-imperceptible. Compared with conventional supervised UAPs, which require knowledge of the training data, data-independent unsupervised UAPs are more broadly applicable...
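To make the defining property of a UAP concrete, here is a minimal sketch of how a single image-agnostic perturbation is applied at test time. The perturbation `delta` below is random noise purely for illustration (the paper learns it via prior-driven uncertainty approximation, which is not reproduced here), and the L-infinity budget `EPS` is an assumed value, not one taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
EPS = 10.0 / 255.0  # assumed L_inf budget, a common choice in UAP work

def project_linf(delta, eps=EPS):
    """Project a perturbation onto the L_inf ball of radius eps."""
    return np.clip(delta, -eps, eps)

def apply_uap(images, delta):
    """Add the SAME perturbation to every image, then clip to valid range."""
    return np.clip(images + delta, 0.0, 1.0)

# One shared perturbation, many images: the defining property of a UAP.
delta = project_linf(rng.normal(scale=0.05, size=(3, 32, 32)))
images = rng.uniform(size=(4, 3, 32, 32))   # a batch of 4 RGB 32x32 "images"
adv = apply_uap(images, delta)
```

Because `delta` is shared across the whole batch, fooling a model would require no per-image optimization at inference time; that image-agnosticism is what distinguishes a UAP from a per-input adversarial example.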




