Multiplicative Reweighting for Robust Neural Network Optimization

Neural networks' performance degrades in the presence of noisy labels at train time. Inspired by the setting of learning with expert advice, where multiplicative weights (MW) updates were recently shown to be robust to moderate data corruptions in expert advice, we propose to use MW for reweighting examples during neural network optimization. We theoretically establish the convergence of our method when used with gradient descent and prove its advantage under label noise in one-dimensional cases. We then empirically validate our findings in the general case by showing that MW improves the accuracy of neural networks in the presence of label noise on CIFAR-10, CIFAR-100, and Clothing1M. We also show the impact of our approach on adversarial robustness.
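To make the idea concrete, here is a minimal sketch of MW-style example reweighting combined with gradient descent, on a toy 1-D logistic regression with one flipped label. This is an illustrative assumption, not the authors' exact algorithm: we assume each example's weight is multiplied by `exp(-eta * loss)` and renormalized, so persistently high-loss (likely mislabeled) examples are downweighted.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mw_reweighted_gd(x, y, epochs=200, lr=0.5, eta=0.3):
    """Gradient descent on a weighted logistic loss, with multiplicative
    weights (MW) updates on the per-example weights (hypothetical sketch)."""
    n = len(x)
    w, b = 0.0, 0.0
    ex_weights = np.full(n, 1.0 / n)  # example weights on the simplex
    eps = 1e-12
    for _ in range(epochs):
        p = sigmoid(w * x + b)
        # gradient of the example-weighted cross-entropy loss
        grad = p - y
        w -= lr * np.sum(ex_weights * grad * x)
        b -= lr * np.sum(ex_weights * grad)
        # per-example cross-entropy loss
        loss = -(y * np.log(p + eps) + (1.0 - y) * np.log(1.0 - p + eps))
        # MW update: shrink the weight of high-loss examples, then renormalize
        ex_weights = ex_weights * np.exp(-eta * loss)
        ex_weights /= ex_weights.sum()
    return w, b, ex_weights

# Four clean points plus one with a flipped (noisy) label at x = 1.5.
x = np.array([-2.0, -1.0, 1.0, 2.0, 1.5])
y = np.array([0.0, 0.0, 1.0, 1.0, 0.0])  # last label is corrupted
w, b, ex_weights = mw_reweighted_gd(x, y)
```

After training, the corrupted example accumulates a high loss relative to the clean majority, so its MW weight drops well below the uniform 1/5, reducing its influence on the gradient updates.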
