12 May 2020 • George Adam, Romain Speciel
Adversarial examples, inputs that are slightly perturbed with the aim of fooling a neural network, are known to transfer between models: an example crafted to be effective against one model will often fool another.
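The transfer phenomenon can be illustrated with a minimal sketch, assuming a toy linear "classifier" f(x) = w·x (real adversarial attacks target deep networks, but the gradient-sign perturbation is analogous). The hypothetical `fgsm_perturb` helper applies an FGSM-style step, and the same perturbation is then checked against a second, similar model to show transfer:

```python
import numpy as np

def fgsm_perturb(x, w, eps):
    # For a linear model the gradient of the logit w.r.t. the input is w;
    # stepping against the sign of the gradient lowers the logit (FGSM-style).
    return x - eps * np.sign(w)

# Two similar "models" (weight vectors); adversarial examples often
# transfer between models that learned similar decision boundaries.
w1 = np.array([1.0, -2.0, 0.5])
w2 = np.array([0.8, -1.5, 0.6])

x = np.array([0.2, -0.1, 0.4])          # clean input, positive logit on both models
x_adv = fgsm_perturb(x, w1, eps=0.3)    # crafted against model 1 only

print(w1 @ x, w1 @ x_adv)               # 0.6 -> -0.45: model 1 is fooled
print(w2 @ x, w2 @ x_adv)               # 0.55 -> -0.32: model 2 is fooled too
```

The perturbation was computed using only model 1's weights, yet it also flips model 2's prediction, which is the transferability the abstract refers to.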