Learning Theory
65 papers with code • 0 benchmarks • 0 datasets
Benchmarks
These leaderboards are used to track progress in Learning Theory
Most implemented papers
A Contextual-Bandit Approach to Personalized News Article Recommendation
In this work, we model personalized recommendation of news articles as a contextual bandit problem, a principled approach in which a learning algorithm sequentially selects articles to serve users based on contextual information about the users and articles, while simultaneously adapting its article-selection strategy based on user-click feedback to maximize total user clicks.
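This is the paper associated with the LinUCB algorithm for contextual bandits. Below is a minimal sketch of a disjoint LinUCB-style arm, assuming d-dimensional user/article context vectors and binary click rewards; the class layout, variable names, and exploration constant `alpha` are illustrative choices, not the paper's exact notation.

```python
import numpy as np

class LinUCBArm:
    """One arm (article) with a ridge-regression model of its expected click rate."""
    def __init__(self, d, alpha=1.0):
        self.alpha = alpha          # exploration strength (assumed hyperparameter)
        self.A = np.eye(d)          # d x d regularized design matrix
        self.b = np.zeros(d)        # reward-weighted sum of context features

    def ucb(self, x):
        """Upper confidence bound on the expected reward for context x."""
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b
        return theta @ x + self.alpha * np.sqrt(x @ A_inv @ x)

    def update(self, x, reward):
        self.A += np.outer(x, x)
        self.b += reward * x

def select_article(arms, x):
    """Serve the article whose arm has the highest UCB for the current context."""
    return max(range(len(arms)), key=lambda i: arms[i].ucb(x))

# Usage sketch: pick an article, observe (placeholder) click feedback, update that arm.
d, n_articles = 6, 5
arms = [LinUCBArm(d) for _ in range(n_articles)]
x = np.random.rand(d)
chosen = select_article(arms, x)
click = float(np.random.rand() < 0.1)   # stand-in for real user-click feedback
arms[chosen].update(x, click)
```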
Generalization in Machine Learning via Analytical Learning Theory
This paper introduces a novel measure-theoretic theory for machine learning that does not require statistical assumptions.
Robust Learning from Untrusted Sources
Modern machine learning methods often require more data for training than a single expert can provide.
A Brain-inspired Algorithm for Training Highly Sparse Neural Networks
Concretely, by exploiting the cosine similarity metric to measure the importance of connections, our proposed method, Cosine similarity-based and Random Topology Exploration (CTRE), evolves the topology of sparse neural networks by adding the most important connections to the network without computing dense gradients in the backward pass.
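A rough sketch of the connection-scoring idea described above: candidate connections are ranked by the cosine similarity between pre-neuron activations and a post-neuron signal over a batch, and the highest-scoring absent connections are added to the sparse mask. The function names, the choice of post-neuron signal, and the growth schedule are assumptions, not the paper's exact procedure.

```python
import numpy as np

def cosine_importance(pre_acts, post_signals):
    """Score candidate connections (i, j) by cosine similarity over a batch.
    pre_acts: (batch, n_in), post_signals: (batch, n_out) -> (n_in, n_out) scores."""
    pre = pre_acts / (np.linalg.norm(pre_acts, axis=0, keepdims=True) + 1e-12)
    post = post_signals / (np.linalg.norm(post_signals, axis=0, keepdims=True) + 1e-12)
    return np.abs(pre.T @ post)

def grow_connections(mask, scores, k):
    """Activate the k highest-scoring connections that are currently absent."""
    scores = np.where(mask, -np.inf, scores)        # ignore existing connections
    top_flat = np.argsort(scores, axis=None)[-k:]
    new_mask = mask.copy()
    new_mask[np.unravel_index(top_flat, mask.shape)] = True
    return new_mask
```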
Foreseeing the Benefits of Incidental Supervision
Real-world applications often require improved models by leveraging a range of cheap incidental supervision signals.
Understanding Boolean Function Learnability on Deep Neural Networks
Computational learning theory states that many classes of Boolean formulas are learnable in polynomial time.
Learning Curves for SGD on Structured Features
To analyze the influence of data structure on test loss dynamics, we study an exactly solvable model of stochastic gradient descent (SGD) on mean square loss, which predicts test loss when training on features with arbitrary covariance structure.
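A toy illustration of that setting: single-sample SGD on a linear model with squared loss, where the feature covariance has an assumed power-law spectrum and the population (test) loss is tracked in closed form at each step. This only reproduces the simulation side, not the paper's analytical learning-curve predictions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, steps, lr = 50, 2000, 0.05

# Assumed power-law covariance spectrum for the features (illustrative choice).
eigs = 1.0 / np.arange(1, d + 1) ** 1.5
w_star = rng.normal(size=d)                 # target linear function

w = np.zeros(d)
test_losses = []
for t in range(steps):
    x = rng.normal(size=d) * np.sqrt(eigs)  # one fresh sample per SGD step
    y = w_star @ x
    grad = (w @ x - y) * x                  # gradient of 0.5 * (w.x - y)^2
    w -= lr * grad
    # population MSE has a closed form here: 0.5 * (w - w*)^T Sigma (w - w*)
    test_losses.append(0.5 * np.sum(eigs * (w - w_star) ** 2))
```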
Model Zoo: A Growing "Brain" That Learns Continually
We use statistical learning theory and experimental analysis to show how multiple tasks can interact with each other in a non-trivial fashion when a single model is trained on them.
Latent Equilibrium: A unified learning theory for arbitrarily fast computation with arbitrarily slow neurons
The response time of physical computational elements is finite, and neurons are no exception.
Towards a unified view of unsupervised non-local methods for image denoising: the NL-Ridge approach
We propose a unified view of unsupervised non-local methods for image denoising that linearly combine noisy image patches.
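A toy sketch of the "linearly combine similar noisy patches" idea: a group of patches similar to a reference patch is combined with weights obtained from a plain ridge closed form. The helper name and regularization choice are assumptions made here for illustration; the paper instead derives the combination weights from an unbiased risk estimate (NL-Ridge).

```python
import numpy as np

def denoise_patch(ref, candidates, sigma, lam=None):
    """Estimate a clean patch as a linear combination of similar noisy patches.
    ref: (p,) flattened reference patch; candidates: (k, p) similar noisy patches.
    Uses a simple ridge closed form, not the paper's exact weight estimator."""
    Y = candidates                           # (k, p) group of similar patches
    if lam is None:
        lam = candidates.shape[1] * sigma ** 2   # assumed regularization scale
    G = Y @ Y.T                              # Gram matrix of the patch group
    theta = np.linalg.solve(G + lam * np.eye(len(Y)), Y @ ref)
    return theta @ Y                         # combined estimate of the clean patch
```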