SPROUT: Self-Progressing Robust Training

ICLR 2020 (anonymous submission)

Enhancing model robustness under new and even adversarial environments is a crucial milestone toward building trustworthy and reliable machine learning systems. Current robust training methods such as adversarial training explicitly specify an "attack" (e.g., an $\ell_{\infty}$-norm bounded perturbation) to generate adversarial examples during model training in order to improve adversarial robustness...
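For context on the baseline the abstract contrasts SPROUT against, below is a minimal sketch of attack-specific $\ell_{\infty}$-bounded adversarial training (PGD-style, after Madry et al.), not SPROUT itself. The model, data loader, and hyperparameters ($\epsilon = 8/255$, step size, iteration count) are illustrative assumptions, not values from the paper.

```python
# Sketch of l_inf-bounded PGD adversarial training -- the attack-specific
# baseline the abstract refers to, NOT the SPROUT method itself.
# All hyperparameters below are illustrative assumptions.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft l_inf-bounded adversarial examples via projected gradient descent."""
    # Random start inside the eps-ball, clipped to the valid pixel range [0, 1].
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        # Gradient of the loss w.r.t. the input only (does not touch model grads).
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascent step on the sign of the gradient, then project back
        # into the l_inf ball around x and onto the valid pixel range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adv_train_epoch(model, loader, optimizer):
    """One epoch of adversarial training: fit the model on PGD examples."""
    model.train()
    for x, y in loader:
        x_adv = pgd_attack(model, x, y)
        optimizer.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()
        optimizer.step()
```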
