Learning Theory

65 papers with code • 0 benchmarks • 0 datasets

Most implemented papers

A Contextual-Bandit Approach to Personalized News Article Recommendation

ray-project/ray 28 Feb 2010

In this work, we model personalized recommendation of news articles as a contextual bandit problem, a principled approach in which a learning algorithm sequentially selects articles to serve users based on contextual information about the users and articles, while simultaneously adapting its article-selection strategy based on user-click feedback to maximize total user clicks.
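A minimal sketch of this selection loop, in the spirit of the disjoint LinUCB algorithm: each article keeps ridge-regression statistics of its observed contexts and clicks, and the article with the highest upper confidence bound on expected clicks is served. The dimensions, exploration weight, and simulated click model below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_arms, d, alpha = 5, 6, 1.0               # articles, context dimension, exploration weight (assumed)

# Per-arm ridge-regression statistics: A = I + sum(x x^T), b = sum(reward * x)
A = [np.eye(d) for _ in range(n_arms)]
b = [np.zeros(d) for _ in range(n_arms)]

true_theta = rng.normal(size=(n_arms, d))  # hidden click model, used only to simulate feedback

for t in range(2000):
    x = rng.normal(size=d)                 # user/article context for this round
    # Upper confidence bound per arm: x^T theta_hat + alpha * sqrt(x^T A^-1 x)
    ucb = []
    for a in range(n_arms):
        A_inv = np.linalg.inv(A[a])
        theta_hat = A_inv @ b[a]
        ucb.append(x @ theta_hat + alpha * np.sqrt(x @ A_inv @ x))
    a = int(np.argmax(ucb))                # serve the article with the highest UCB
    click = float(rng.random() < 1 / (1 + np.exp(-true_theta[a] @ x)))  # simulated user click
    A[a] += np.outer(x, x)                 # update only the served arm's statistics
    b[a] += click * x
```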

Generalization in Machine Learning via Analytical Learning Theory

Learning-and-Intelligent-Systems/DualCutout 21 Feb 2018

This paper introduces a novel measure-theoretic theory for machine learning that does not require statistical assumptions.

Robust Learning from Untrusted Sources

NikolaKon1994/Robust-Learning-from-Untrusted-Sources 29 Jan 2019

Modern machine learning methods often require more data for training than a single expert can provide.

A Brain-inspired Algorithm for Training Highly Sparse Neural Networks

zahraatashgahi/ctre 17 Mar 2019

Concretely, by exploiting the cosine similarity metric to measure the importance of connections, our proposed method, Cosine similarity-based and Random Topology Exploration (CTRE), evolves the topology of sparse neural networks by adding the most important connections to the network without computing dense gradients in the backward pass.
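A toy sketch of that growth step, assuming connections between two layers are scored by the cosine similarity of the corresponding units' activations over a batch and the highest-scoring absent connections are added to the sparse mask. The layer sizes, growth count, and random activations are illustrative stand-ins, not the zahraatashgahi/ctre defaults.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, batch = 20, 10, 64            # illustrative layer sizes

mask = rng.random((n_in, n_out)) < 0.1     # current sparse connectivity (10% dense, assumed)
a_in = rng.normal(size=(batch, n_in))      # activations of input-side units over a batch
a_out = rng.normal(size=(batch, n_out))    # activations of output-side units

# Cosine similarity between each input unit's and output unit's activation vectors
a_in_n = a_in / np.linalg.norm(a_in, axis=0, keepdims=True)
a_out_n = a_out / np.linalg.norm(a_out, axis=0, keepdims=True)
sim = np.abs(a_in_n.T @ a_out_n)           # (n_in, n_out) importance scores

# Grow: among connections that do not exist yet, add the k most important ones
k = 15
sim[mask] = -np.inf                        # never re-add existing connections
top = np.argsort(sim, axis=None)[::-1][:k]
mask[np.unravel_index(top, sim.shape)] = True
```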

Foreseeing the Benefits of Incidental Supervision

CogComp/PABI EMNLP 2021

Real-world applications often require improved models by leveraging a range of cheap incidental supervision signals.

Understanding Boolean Function Learnability on Deep Neural Networks

machine-reasoning-ufrgs/mlbf 13 Sep 2020

Computational learning theory states that many classes of Boolean formulas are learnable in polynomial time.
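A small empirical illustration of the kind of experiment this line of work runs: sample assignments of a random 3-CNF formula, label them by satisfaction, and fit a small MLP. The formula size, sample count, and scikit-learn classifier are assumptions for illustration, not the machine-reasoning-ufrgs/mlbf setup.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_vars, n_clauses, n_samples = 10, 8, 5000    # illustrative formula and dataset sizes

# Random 3-CNF formula: each clause is 3 (variable index, negated?) literals
clauses = [(rng.choice(n_vars, 3, replace=False), rng.random(3) < 0.5)
           for _ in range(n_clauses)]

def satisfies(x):
    # x is a 0/1 assignment; a clause holds if any of its literals is true
    return all(np.any(x[idx] != neg) for idx, neg in clauses)

X = rng.integers(0, 2, size=(n_samples, n_vars))
y = np.array([satisfies(x) for x in X], dtype=int)

# Small MLP as a stand-in for the architectures studied in the paper
clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300).fit(X[:4000], y[:4000])
print("held-out accuracy:", clf.score(X[4000:], y[4000:]))
```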

Learning Curves for SGD on Structured Features

google/neural-tangents ICLR 2022

To analyze the influence of data structure on test loss dynamics, we study an exactly solvable model of stochastic gradient descent (SGD) on the mean squared loss, which predicts the test loss when training on features with arbitrary covariance structure.
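A self-contained simulation in the spirit of this setup: online SGD on the mean squared loss over Gaussian features with a power-law covariance spectrum, recording the test loss as training proceeds. The spectrum exponent, learning rate, and teacher model are illustrative assumptions rather than the paper's analytical solution.

```python
import numpy as np

rng = np.random.default_rng(0)
d, steps, lr = 100, 5000, 0.05

# Features with a power-law covariance spectrum (illustrative structure)
eigs = 1.0 / np.arange(1, d + 1) ** 1.5
w_star = rng.normal(size=d)                  # teacher weights (assumed linear target)

def sample(n):
    x = rng.normal(size=(n, d)) * np.sqrt(eigs)   # covariance diag(eigs)
    return x, x @ w_star

x_test, y_test = sample(2000)
w = np.zeros(d)
losses = []
for t in range(steps):
    x, y = sample(1)                         # one fresh sample per step (online SGD)
    w -= lr * (x[0] @ w - y[0]) * x[0]       # gradient of 0.5 * (x.w - y)^2
    if t % 100 == 0:
        losses.append(np.mean((x_test @ w - y_test) ** 2))
# `losses` traces the test-loss learning curve as a function of SGD steps
```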

Model Zoo: A Growing "Brain" That Learns Continually

grasp-lyrl/modelzoo_continual 6 Jun 2021

We use statistical learning theory and experimental analysis to show how multiple tasks can interact with each other in a non-trivial fashion when a single model is trained on them.

Towards a unified view of unsupervised non-local methods for image denoising: the NL-Ridge approach

sherbret/nlridge 1 Mar 2022

We propose a unified view of unsupervised non-local methods for image denoising that linearly combine noisy image patches.
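A rough sketch of that core idea: a group of similar noisy patches is denoised by a single linear combiner applied to the whole group, with the combiner obtained in closed form from the noisy Gram matrix and a known noise level. The toy data, grouping, and plug-in formula for Theta below are simplifying assumptions and are not taken from the sherbret/nlridge implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.1                                  # known noise standard deviation (assumed)
n, k = 64, 16                                # flattened patch size and group size

# Toy data: a stack of similar clean patches plus Gaussian noise
clean = rng.normal(size=(1, n)) + 0.05 * rng.normal(size=(k, n))
noisy = clean + sigma * rng.normal(size=(k, n))

# Group of similar noisy patches, rows = patches (here the whole toy stack)
Y = noisy

# Closed-form linear combiner: estimate the clean patches as Theta @ Y, where Theta
# is a rough plug-in for argmin_Theta E||Theta Y - X||^2 with Y = X + noise
G = Y @ Y.T                                  # k x k Gram matrix of the group
Theta = (G - n * sigma**2 * np.eye(k)) @ np.linalg.inv(G)
denoised = Theta @ Y

print("noisy MSE:   ", np.mean((noisy - clean) ** 2))
print("denoised MSE:", np.mean((denoised - clean) ** 2))
```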