A Spectral Perspective of Neural Networks Robustness to Label Noise

1 Jan 2021 · Oshrat Bar, Amnon Drory, Raja Giryes

Deep networks usually require a massive amount of labeled data for their training. Yet, such data may include some mistakes in the labels. Interestingly, networks have been shown to be robust to such errors. This work uses recent developments in the analysis of neural networks' function space to provide an explanation for this robustness. In particular, we relate the smoothness regularization that usually exists in conventional training to the attenuation of high frequencies, which mainly characterize noise. Using a connection between smoothness and the spectral norm of the network weights, we suggest that one may further improve robustness via spectral normalization. Experiments validate our claims and show the advantage of this normalization for classification with label noise.
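The core operation the abstract points to, spectral normalization, rescales each weight matrix so its largest singular value (spectral norm) is 1, bounding the layer's Lipschitz constant and thus the network's sensitivity to high-frequency perturbations. The sketch below (function name and NumPy implementation are ours, not from the paper) estimates the spectral norm with power iteration and divides the weights by it; deep learning frameworks such as PyTorch provide a built-in version (`torch.nn.utils.spectral_norm`) that folds this into training.

```python
import numpy as np

def spectral_normalize(W, n_iters=50):
    """Scale W so its spectral norm (largest singular value) is 1.

    Power iteration on W @ W.T estimates the leading singular
    vectors cheaply, avoiding a full SVD at every training step.
    """
    rng = np.random.default_rng(0)
    u = rng.standard_normal(W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v  # estimated largest singular value
    return W / sigma

# Example: a weight matrix with a large spectral norm gets rescaled to 1.
W = np.random.default_rng(1).standard_normal((4, 3)) * 5.0
W_sn = spectral_normalize(W)
print(np.linalg.svd(W_sn, compute_uv=False)[0])  # close to 1.0
```

In practice the normalization is reapplied after every gradient update (or implemented as a reparametrization), so the constraint holds throughout training rather than only once.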

