no code implementations • 31 Jul 2023 • Javed Lindner, David Dahmen, Michael Krämer, Moritz Helias
Applying our formalism to a synthetic task and to MNIST, we obtain a homogeneous kernel-matrix approximation for the learning curve, together with corrections due to data variability that allow us to estimate generalization properties, and we derive exact bounds on the learning curve in the limit of infinitely many training data points.