A general framework for defining and optimizing robustness

19 Jun 2020 · Alessandro Tibo, Manfred Jaeger, Kim G. Larsen

Robustness of neural networks has recently attracted a great deal of interest, but the many investigations in this area lack a precise common foundation for robustness concepts. In this paper, we therefore propose a rigorous and flexible framework for defining different types of robustness properties for classifiers. Our robustness concept is based on the postulates that the robustness of a classifier should be treated as a property independent of its accuracy, and that it should be defined in purely mathematical terms, without reliance on algorithmic procedures for its measurement. We develop a very general robustness framework that is applicable to any type of classification model and that encompasses relevant robustness concepts for investigations ranging from safety against adversarial attacks to the transferability of models to new domains. For two prototypical, distinct robustness objectives, we then propose new learning approaches based on neural network co-training strategies for obtaining image classifiers optimized for these respective objectives.
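
To make the accuracy-independence postulate concrete, the sketch below estimates a simple, label-free robustness score: the fraction of inputs whose predicted class is unchanged under small random ℓ∞ perturbations. This is a minimal illustration of a generic robustness measure, not the paper's formal definitions or its co-training approach; the function and parameter names (`predict`, `epsilon`, `n_trials`) are hypothetical.

```python
import numpy as np

def perturbation_stability(predict, inputs, epsilon=0.03, n_trials=20, seed=0):
    """Fraction of inputs whose predicted class survives random l-inf noise.

    No ground-truth labels are used, so the score is independent of accuracy:
    a classifier can be highly stable yet wrong, or accurate yet unstable.
    """
    rng = np.random.default_rng(seed)
    inputs = np.asarray(inputs, dtype=np.float32)
    base = predict(inputs)                               # reference predictions
    stable = np.ones(len(inputs), dtype=bool)
    for _ in range(n_trials):
        noise = rng.uniform(-epsilon, epsilon, size=inputs.shape).astype(np.float32)
        perturbed = np.clip(inputs + noise, 0.0, 1.0)    # keep pixels in [0, 1]
        stable &= (predict(perturbed) == base)           # prediction must not flip
    return stable.mean()
```

Any image classifier wrapped as a `predict` function returning hard labels for a batch of inputs in [0, 1] could be plugged in. Note that this is only an empirical estimate obtained by sampling; the paper's point is precisely that the underlying robustness property itself should be defined mathematically, independently of such measurement procedures.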
