Search Results for author: Nicolas Carlini

Found 1 paper, 0 papers with code

Adversarial Examples Are a Natural Consequence of Test Error in Noise

no code implementations • ICLR 2019 • Nic Ford, Justin Gilmer, Nicolas Carlini, Dogus Cubuk

Over the last few years, the phenomenon of adversarial examples --- maliciously constructed inputs that fool trained machine learning models --- has captured the attention of the research community, especially when the adversary is restricted to small modifications of a correctly handled input.

Adversarial Robustness
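As the abstract above notes, adversarial examples are typically constructed as small modifications of an input the model already handles correctly. The sketch below is only an illustration of that idea using the fast gradient sign method (a standard L-infinity attack, not this paper's contribution); `model`, `x`, `y`, and `epsilon` are placeholder assumptions, not names from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=8 / 255):
    """Illustrative sketch: craft an L-infinity-bounded adversarial
    example with the fast gradient sign method. `model` is any
    differentiable classifier, `x` an input batch in [0, 1], `y` the
    true labels, and `epsilon` the perturbation budget (all assumed)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step by epsilon in the direction that increases the loss,
    # then clamp back to the valid input range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```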
