The Power of Interpolation: Understanding the Effectiveness of SGD in Modern Over-parametrized Learning

In this paper we aim to formally explain the phenomenon of fast convergence of SGD observed in modern machine learning. The key observation is that most modern learning architectures are over-parametrized and are trained to interpolate the data by driving the empirical loss (classification and regression) close to zero...
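The interpolation setting described above can be illustrated with a minimal sketch (not from the paper): constant-step-size SGD on an over-parametrized, noiseless least-squares problem, where a zero-loss interpolating solution exists. All dimensions, the step size, and the synthetic data below are illustrative assumptions, chosen only to show the empirical loss being driven close to zero without any step-size decay.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100             # fewer samples than parameters: over-parametrized
X = rng.standard_normal((n, d))
w_star = rng.standard_normal(d)
y = X @ w_star             # noiseless labels, so exact interpolation is achievable

w = np.zeros(d)
eta = 0.01                 # constant step size; no decay schedule
for t in range(2000):
    i = rng.integers(n)                  # sample one data point uniformly
    grad = (X[i] @ w - y[i]) * X[i]      # gradient of the single-sample squared loss
    w -= eta * grad

loss = np.mean((X @ w - y) ** 2) / 2
print(f"final empirical loss: {loss:.2e}")
```

In this regime the residual contracts geometrically, so the printed loss is near machine precision; with step-size decay or mini-batch averaging the same loop would converge far more slowly, which is the contrast the paper's analysis formalizes.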
