ICLR 2019 • Nic Ford, Justin Gilmer, Nicolas Carlini, Dogus Cubuk
Over the last few years, the phenomenon of adversarial examples (maliciously constructed inputs that fool trained machine learning models) has captured the attention of the research community, especially when the adversary is restricted to making small modifications of a correctly handled input.
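The "small modification" constraint can be illustrated with a minimal sketch. The toy linear classifier and the epsilon value below are assumptions for illustration, not the paper's setup: a single FGSM-style step (moving each coordinate by at most a small epsilon against the gradient of the score) flips the prediction on an input the model handles correctly.

```python
import numpy as np

# Toy linear binary classifier: score = w @ x + b, predict class 1 if score > 0.
# (Hypothetical model chosen for illustration; the paper studies deep networks.)
rng = np.random.default_rng(0)
w = rng.normal(size=100)
b = 0.0

# Construct an input that the model classifies correctly (score exactly 1.0).
x = w / (w @ w)
assert (w @ x + b) > 0

# FGSM-style perturbation: for a linear model the gradient of the score
# with respect to x is just w, so step each coordinate by -eps * sign(w).
eps = 0.05                    # small L-infinity budget
x_adv = x - eps * np.sign(w)  # each coordinate moves by at most eps

print(w @ x + b)     # positive: the clean input is classified as class 1
print(w @ x_adv + b) # negative here: a tiny perturbation flips the label
```

Even though no coordinate of `x_adv` differs from `x` by more than `eps`, the small per-coordinate changes accumulate across dimensions and push the score past the decision boundary, which is the core of the small-perturbation threat model described above.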