Multiplicative Reweighting for Robust Neural Network Optimization

24 Feb 2021 · Noga Bar, Tomer Koren, Raja Giryes

The performance of neural networks degrades in the presence of noisy labels at train time. Inspired by the setting of learning with expert advice, where multiplicative weights (MW) updates were recently shown to be robust to moderate corruptions of the expert advice, we propose to use MW to reweight examples during neural network optimization. We theoretically establish the convergence of our method when used with gradient descent and prove its advantage for label noise in 1-d cases. We then validate our findings empirically in the general case by showing that MW improves neural network accuracy in the presence of label noise on CIFAR-10, CIFAR-100, and Clothing1M. We also show the impact of our approach on adversarial robustness.
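The core idea, per the abstract, is to maintain a multiplicative weight per training example and take gradient steps on the weighted loss, so that persistently high-loss (likely mislabeled) examples are exponentially down-weighted. The sketch below illustrates this on weighted logistic regression; the learning rates, the exact exponential update, and the function name `mw_reweighted_gd` are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def mw_reweighted_gd(X, y, eta_w=0.5, lr=0.1, epochs=100):
    """Sketch of multiplicative-weights example reweighting (assumed
    form: weights shrink exponentially in per-example loss, then are
    renormalized onto the simplex; the model descends the weighted loss)."""
    n, d = X.shape
    w = np.zeros(d)            # linear model parameters
    p = np.full(n, 1.0 / n)    # per-example MW weights, start uniform
    for _ in range(epochs):
        pred = 1.0 / (1.0 + np.exp(-(X @ w)))        # sigmoid outputs
        losses = -(y * np.log(pred + 1e-12)
                   + (1 - y) * np.log(1 - pred + 1e-12))
        # MW update: multiplicatively shrink high-loss examples, renormalize
        p *= np.exp(-eta_w * losses)
        p /= p.sum()
        # gradient step on the p-weighted logistic loss
        w -= lr * (X.T @ (p * (pred - y)))
    return w, p
```

On a toy problem with one flipped label, the flipped example typically ends up with a weight well below the uniform 1/n, which is the mechanism the paper exploits for label-noise robustness.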
