5 Dec 2018 • Huangyi Ge, Sze Yiu Chau, Bruno Ribeiro, Ninghui Li
Image classifiers often suffer from adversarial examples, which are generated by strategically adding a small amount of noise to input images to trick classifiers into misclassification.
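The generation process described above can be sketched with the classic Fast Gradient Sign Method (FGSM); note this is a generic illustration of adversarial-example crafting, not the specific method studied in this paper. The toy linear "classifier", the `fgsm_perturb` helper, and the step size `eps` are all illustrative assumptions.

```python
import numpy as np

def fgsm_perturb(x, grad_loss_x, eps):
    """One FGSM step (hypothetical helper): nudge each pixel by eps in the
    direction that increases the classifier's loss."""
    x_adv = x + eps * np.sign(grad_loss_x)
    return np.clip(x_adv, 0.0, 1.0)  # keep pixels in a valid image range

# Toy linear "classifier": logits = W @ x (stand-in for a real network)
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))          # 3 classes, 8 "pixels"
x = rng.uniform(size=8)              # the clean input image
label = int(np.argmax(W @ x))        # treat the current prediction as the label

# For loss = -logits[label], the gradient w.r.t. x is -W[label]
grad = -W[label]
x_adv = fgsm_perturb(x, grad, eps=0.1)
# x_adv differs from x by at most eps per pixel, yet may flip the prediction
```

The perturbation budget `eps` controls the trade-off at the heart of the attack: the noise must stay small enough to be imperceptible while still moving the input across the classifier's decision boundary.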