65 papers with code • 0 benchmarks • 0 datasets
These leaderboards are used to track progress in Learning Theory
In this work, we model personalized news-article recommendation as a contextual bandit problem: a principled approach in which a learning algorithm sequentially selects articles to serve users based on contextual information about the users and articles, while simultaneously adapting its article-selection strategy from user-click feedback to maximize total clicks.
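As a minimal illustration of the select-then-update loop described above, here is a toy epsilon-greedy contextual bandit (not the paper's algorithm; the class name and per-context click-rate estimates are illustrative assumptions):

```python
import random

class EpsilonGreedyBandit:
    """Toy contextual bandit: with probability epsilon, explore a random
    article; otherwise serve the article with the highest estimated
    click-through rate for this user context. Estimates are updated
    from click feedback after each impression."""

    def __init__(self, epsilon=0.1, seed=0):
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.clicks = {}  # (context, article) -> click count
        self.views = {}   # (context, article) -> impression count

    def select(self, context, articles):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(articles)  # explore

        def ctr(article):
            v = self.views.get((context, article), 0)
            return self.clicks.get((context, article), 0) / v if v else 0.0

        return max(articles, key=ctr)  # exploit

    def update(self, context, article, clicked):
        key = (context, article)
        self.views[key] = self.views.get(key, 0) + 1
        self.clicks[key] = self.clicks.get(key, 0) + int(clicked)
```

With epsilon set to zero the policy is purely greedy, which makes the feedback loop easy to see: after one click on article "A" and one non-click on "B", the same context is served "A" again.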
This paper introduces a novel measure-theoretic theory for machine learning that does not require statistical assumptions.
Concretely, by exploiting the cosine similarity metric to measure the importance of connections, our proposed method, Cosine similarity-based and Random Topology Exploration (CTRE), evolves the topology of sparse neural networks by adding the most important connections to the network without computing dense gradients in the backward pass.
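A rough sketch of the core idea, scoring candidate connections by the cosine similarity of unit activations and growing the top-scoring absent ones (function names and the exact scoring details are assumptions, not CTRE's implementation):

```python
import numpy as np

def cosine_importance(pre_acts, post_acts):
    """Score every candidate pre->post connection by the cosine similarity
    between the two units' activations over a batch.
    pre_acts: (batch, n_pre); post_acts: (batch, n_post).
    Returns an (n_pre, n_post) importance matrix."""
    pre = pre_acts / (np.linalg.norm(pre_acts, axis=0, keepdims=True) + 1e-12)
    post = post_acts / (np.linalg.norm(post_acts, axis=0, keepdims=True) + 1e-12)
    return pre.T @ post

def top_k_new_connections(importance, existing_mask, k):
    """Return the k highest-|importance| connections not already present."""
    scores = np.where(existing_mask, -np.inf, np.abs(importance))
    flat = np.argsort(scores, axis=None)[::-1][:k]
    return [np.unravel_index(i, importance.shape) for i in flat]
```

Note that this ranking uses only forward activations, which is the point of the abstract's claim: no dense backward-pass gradient is needed to decide where to grow connections.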
To analyze the influence of data structure on test loss dynamics, we study an exactly solvable model of stochastic gradient descent (SGD) on mean square loss, which predicts the test loss when training on features with arbitrary covariance structure.
We use statistical learning theory and experimental analysis to show how multiple tasks can interact with each other in a non-trivial fashion when a single model is trained on them.
Latent Equilibrium: A unified learning theory for arbitrarily fast computation with arbitrarily slow neurons
The response time of physical computational elements is finite, and neurons are no exception.
We propose a unified view of unsupervised non-local methods for image denoising that linearly combine noisy image patches.
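The common building block, a linear (here convex) combination of noisy patches weighted by patch similarity, can be sketched in a few lines; this is a generic NLM-style weighting, not the paper's unified estimator, and `h` is an assumed smoothing parameter:

```python
import numpy as np

def nonlocal_denoise_pixel(patches, ref, h=0.5):
    """Estimate the center pixel of patch `ref` as a convex combination of
    the center pixels of candidate noisy patches, with weights decaying in
    the squared patch distance.
    patches: (n, p, p) stack of candidate patches; ref: (p, p)."""
    d2 = np.sum((patches - ref) ** 2, axis=(1, 2))
    w = np.exp(-d2 / (h ** 2))
    w /= w.sum()                      # weights sum to 1
    c = patches.shape[1] // 2         # center-pixel index
    return float(np.sum(w * patches[:, c, c]))
```

Different non-local methods then correspond to different choices of candidate set and weighting, which is the sense in which they admit a unified linear-combination view.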