Diverse Gaussian Noise Consistency Regularization for Robustness and Uncertainty Calibration

2 Apr 2021 · Theodoros Tsiligkaridis, Athanasios Tsiligkaridis

Deep neural networks achieve high prediction accuracy when the training and test distributions coincide. In practice, however, various types of corruption deviate from this setup and cause severe performance degradation. Few methods have been proposed to address generalization in the presence of unforeseen domain shifts. In particular, digital noise corruptions arise commonly in practice during the image acquisition stage and present a significant challenge for current methods. In this paper, we propose a diverse Gaussian noise consistency regularization method for improving the robustness of image classifiers under a variety of corruptions while maintaining high clean accuracy. We derive bounds to motivate and understand the behavior of our Gaussian noise consistency regularization using a local loss landscape analysis. Our approach improves robustness against unforeseen noise corruptions by 4.2-18.4% over adversarial training and other strong diverse data augmentation baselines across several benchmarks. Furthermore, it improves robustness and uncertainty calibration by 3.7% and 5.5%, respectively, against all common corruptions (weather, digital, blur, noise) when combined with state-of-the-art diverse data augmentations.
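
The abstract does not spell out the training objective, but the mechanism it describes can be sketched: train on clean images with cross-entropy while penalizing divergence between predictions on clean inputs and on copies perturbed by Gaussian noise at several scales. Below is a minimal PyTorch-style sketch of such a loss; the function name, the noise scales (sigmas), the weight (lam), and the choice of KL divergence as the consistency measure are illustrative assumptions, not details confirmed by the paper.

```python
import torch
import torch.nn.functional as F

def gaussian_noise_consistency_loss(model, x, y, sigmas=(0.1, 0.2, 0.4), lam=1.0):
    """Cross-entropy on clean inputs plus a consistency penalty that pulls
    predictions on Gaussian-perturbed copies toward the clean predictions.

    NOTE: sigmas, lam, and the KL-based penalty are illustrative choices,
    not hyperparameters taken from the paper.
    """
    logits_clean = model(x)
    task_loss = F.cross_entropy(logits_clean, y)

    # Treat the clean prediction as a fixed target distribution.
    p_clean = F.softmax(logits_clean, dim=1).detach()

    consistency = 0.0
    for sigma in sigmas:
        x_noisy = x + sigma * torch.randn_like(x)  # diverse noise scales
        log_p_noisy = F.log_softmax(model(x_noisy), dim=1)
        # KL(p_clean || p_noisy), averaged over the batch.
        consistency = consistency + F.kl_div(
            log_p_noisy, p_clean, reduction="batchmean"
        )

    return task_loss + lam * consistency / len(sigmas)

# Example usage with a toy classifier and random data.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x = torch.randn(8, 3, 32, 32)   # batch of 32x32 RGB images
y = torch.randint(0, 10, (8,))  # random labels for 10 classes
loss = gaussian_noise_consistency_loss(model, x, y)
loss.backward()
```

Detaching the clean distribution makes the penalty act only on the noisy branches; whether the paper uses a detached target, a symmetric divergence, or a Jensen-Shannon-style term is not stated in the abstract.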
