Learning Universal Adversarial Perturbations with Generative Models

17 Aug 2017 · Jamie Hayes, George Danezis

Neural networks are known to be vulnerable to adversarial examples: inputs that have been intentionally perturbed to remain visually similar to the source input, but that cause a misclassification. It was recently shown that, given a dataset and classifier, there exist so-called universal adversarial perturbations, a single perturbation that causes a misclassification when applied to any input…


Results from the Paper


#6 best model for Graph Classification on NCI1 (using extra training data)

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank | Uses Extra Training Data |
|------|---------|-------|-------------|--------------|-------------|--------------------------|
| Graph Classification | NCI1 | DUGNN | Accuracy | 85.50% | #6 | Yes |