Graph Universal Adversarial Attacks: A Few Bad Actors Ruin Graph Learning Models

12 Feb 2020 · Xiao Zang, Yi Xie, Jie Chen, Bo Yuan

Deep neural networks, while generalizing well, are known to be sensitive to small adversarial perturbations. This phenomenon poses a severe security threat and calls for in-depth investigation of the robustness of deep learning models...
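The abstract is truncated before the attack itself is described, but the title points to the core idea: connecting a victim node to a small, fixed set of "bad actor" nodes can change a graph model's prediction, and the same set works across victims (hence "universal"). The sketch below is a minimal, hypothetical illustration of that idea using a toy two-layer GCN in NumPy; the graph, weights, victim, and bad-actor indices are all assumptions for demonstration, not the paper's actual method, models, or data.

```python
# Illustrative sketch (not the paper's code): applying a universal "bad actor"
# edge perturbation to a victim node and re-running a toy GCN.
import numpy as np

def normalize_adj(adj):
    """Symmetrically normalize an adjacency matrix with self-loops (GCN-style)."""
    adj = adj + np.eye(adj.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(adj.sum(axis=1)))
    return d_inv_sqrt @ adj @ d_inv_sqrt

def gcn_forward(adj, x, w1, w2):
    """Two-layer GCN forward pass: argmax of A_hat ReLU(A_hat X W1) W2 per node."""
    a_hat = normalize_adj(adj)
    h = np.maximum(a_hat @ x @ w1, 0.0)          # ReLU hidden layer
    logits = a_hat @ h @ w2
    return logits.argmax(axis=1)                 # predicted class per node

def apply_universal_attack(adj, victim, bad_actors):
    """Flip the edges between one victim node and a fixed set of 'bad actor'
    nodes; the same bad-actor set is reused for every victim."""
    perturbed = adj.copy()
    for b in bad_actors:
        perturbed[victim, b] = 1 - perturbed[victim, b]
        perturbed[b, victim] = 1 - perturbed[b, victim]
    return perturbed

# Toy graph and randomly initialized weights, purely for illustration.
rng = np.random.default_rng(0)
n_nodes, n_feats, n_classes = 8, 4, 2
adj = (rng.random((n_nodes, n_nodes)) < 0.3).astype(float)
adj = np.triu(adj, 1); adj = adj + adj.T         # symmetric, no self-loops
x = rng.random((n_nodes, n_feats))
w1 = rng.standard_normal((n_feats, 8))
w2 = rng.standard_normal((8, n_classes))

victim, bad_actors = 0, [5, 6]                   # hypothetical choices
before = gcn_forward(adj, x, w1, w2)[victim]
after = gcn_forward(apply_universal_attack(adj, victim, bad_actors), x, w1, w2)[victim]
print(f"prediction for victim node {victim}: {before} -> {after}")
```

With trained weights and a carefully chosen bad-actor set, such an edge flip is the kind of small, reusable perturbation the title alludes to; here the weights are random, so the victim's prediction may or may not change.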
