Learning Theory

86 papers with code • 0 benchmarks • 0 datasets


Most implemented papers

A Contextual-Bandit Approach to Personalized News Article Recommendation

ray-project/ray 28 Feb 2010

In this work, we model personalized recommendation of news articles as a contextual bandit problem: a principled approach in which a learning algorithm sequentially selects articles to serve users based on contextual information about the users and articles, while simultaneously adapting its article-selection strategy from user-click feedback to maximize total clicks.
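This paper's full text proposes LinUCB for this setting. A minimal sketch of disjoint LinUCB is below; the arm count, feature dimension, and `alpha` exploration parameter are illustrative choices, not values from the paper:

```python
import numpy as np

def linucb(contexts, rewards, alpha=1.0):
    """Disjoint LinUCB: at each round, play the arm maximizing an upper
    confidence bound on expected reward under a per-arm linear model.
    contexts: (rounds, arms, d) feature array; rewards: (rounds, arms)."""
    n_rounds, n_arms, d = contexts.shape
    A = [np.eye(d) for _ in range(n_arms)]    # per-arm design matrices
    b = [np.zeros(d) for _ in range(n_arms)]  # per-arm reward-weighted sums
    total = 0.0
    for t in range(n_rounds):
        scores = []
        for a in range(n_arms):
            theta = np.linalg.solve(A[a], b[a])   # ridge estimate of weights
            x = contexts[t, a]
            bonus = alpha * np.sqrt(x @ np.linalg.solve(A[a], x))
            scores.append(theta @ x + bonus)      # mean + exploration bonus
        arm = int(np.argmax(scores))
        r = rewards[t, arm]                       # observe click feedback
        x = contexts[t, arm]
        A[arm] += np.outer(x, x)                  # update chosen arm only
        b[arm] += r * x
        total += r
    return total
```

The exploration bonus shrinks as an arm's design matrix grows, so the policy converges toward exploiting the arm with the best fitted click-through estimate.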

Learning a Variational Network for Reconstruction of Accelerated MRI Data

visva89/varnetrecon 3 Apr 2017

Due to its high computational performance, i.e., a reconstruction time of 193 ms on a single graphics card, and the omission of parameter tuning once the network is trained, this new approach to image reconstruction can easily be integrated into clinical workflow.

Generalization in Machine Learning via Analytical Learning Theory

Learning-and-Intelligent-Systems/DualCutout 21 Feb 2018

This paper introduces a novel measure-theoretic theory for machine learning that does not require statistical assumptions.

Robust Learning from Untrusted Sources

NikolaKon1994/Robust-Learning-from-Untrusted-Sources 29 Jan 2019

Modern machine learning methods often require more data for training than a single expert can provide.

A Brain-inspired Algorithm for Training Highly Sparse Neural Networks

zahraatashgahi/ctre 17 Mar 2019

Concretely, by exploiting the cosine similarity metric to measure the importance of connections, our proposed method, Cosine similarity-based and Random Topology Exploration (CTRE), evolves the topology of sparse neural networks by adding the most important connections to the network without computing dense gradients in the backward pass.
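A simplified sketch of the cosine-similarity growth step is below. It scores each absent connection by the absolute cosine similarity between the batch activations of the two neurons it would join and enables the top-k; this is an illustrative reading of the abstract, not the authors' exact implementation (function and variable names are hypothetical):

```python
import numpy as np

def grow_connections(pre_acts, post_acts, mask, k):
    """Enable the k most 'important' absent connections between two layers.
    pre_acts:  (batch, n_in) activations entering the layer
    post_acts: (batch, n_out) activations leaving the layer
    mask:      (n_in, n_out) bool, True where a weight already exists."""
    pre = pre_acts / (np.linalg.norm(pre_acts, axis=0, keepdims=True) + 1e-12)
    post = post_acts / (np.linalg.norm(post_acts, axis=0, keepdims=True) + 1e-12)
    sim = np.abs(pre.T @ post)          # (n_in, n_out) |cosine similarity|
    sim[mask] = -np.inf                 # never re-add existing connections
    top = np.argsort(sim, axis=None)[::-1][:k]
    rows, cols = np.unravel_index(top, sim.shape)
    new_mask = mask.copy()
    new_mask[rows, cols] = True         # grow the k highest-scoring links
    return new_mask
```

Because the score uses only forward activations, no dense gradient over all absent weights is ever materialized, which matches the abstract's stated motivation.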

Foreseeing the Benefits of Incidental Supervision

CogComp/PABI EMNLP 2021

Real-world applications often require improved models by leveraging a range of cheap incidental supervision signals.

Understanding Boolean Function Learnability on Deep Neural Networks: PAC Learning Meets Neurosymbolic Models

machine-reasoning-ufrgs/mlbf 13 Sep 2020

Computational learning theory states that many classes of boolean formulas are learnable in polynomial time.
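One classical constructive instance of this claim is Valiant's elimination algorithm, which PAC-learns monotone conjunctions in time polynomial in the number of variables and examples. A minimal sketch (the target formula and names are illustrative):

```python
import itertools

def learn_conjunction(examples):
    """Valiant's elimination algorithm for monotone conjunctions: start from
    the conjunction of all variables, then delete every variable that is 0
    in some positive example. Each example is (assignment_tuple, label)."""
    n = len(examples[0][0])
    hyp = set(range(n))
    for x, y in examples:
        if y:
            hyp -= {i for i in hyp if not x[i]}
    return hyp  # hypothesis predicts: all(x[i] for i in hyp)

# Example target: x0 AND x2 over three variables, labeled exhaustively.
target = lambda x: x[0] and x[2]
data = [(x, target(x)) for x in itertools.product([0, 1], repeat=3)]
hyp = learn_conjunction(data)
```

The learned hypothesis is consistent with every training example, and the algorithm touches each example only once, illustrating the polynomial-time learnability the excerpt refers to.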

Reverse Engineering the Neural Tangent Kernel

james-simon/shallow-learning 6 Jun 2021

The development of methods to guide the design of neural networks is an important open challenge for deep learning theory.

Model Zoo: A Growing "Brain" That Learns Continually

grasp-lyrl/modelzoo_continual 6 Jun 2021

We use statistical learning theory and experimental analysis to show how multiple tasks can interact with each other in a non-trivial fashion when a single model is trained on them.