Search Results for author: George D. Montanez

Found 7 papers, 0 papers with code

Undecidability of Underfitting in Learning Algorithms

no code implementations · 4 Feb 2021 · Sonia Sehra, David Flores, George D. Montanez

Using recent machine learning results that present an information-theoretic perspective on underfitting and overfitting, we prove that deciding whether an encodable learning algorithm will always underfit a dataset, even if given unlimited training time, is undecidable.

BIG-bench Machine Learning
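Undecidability results of this kind are standardly proved by reduction from the halting problem. A minimal, self-contained Python sketch of that reduction idea follows; the names (`make_learner`, `halts_within`) and the toy encoding are my own assumptions, not the paper's construction.

```python
# Toy reduction sketch (assumed construction, not the paper's): a decider for
# "this learner always underfits, even with unlimited training time" would
# also decide the halting problem.

def make_learner(halts_within):
    """Build a learner whose underfitting behavior encodes halting.

    halts_within(n): True iff the simulated program halts within n steps
    (a stand-in for running an encoded Turing machine for n steps).
    """
    def learner(dataset, training_steps):
        if halts_within(training_steps):
            # Program has halted: memorize the data -> fits, stops underfitting.
            return dict(dataset)
        # Program still running: emit a trivial constant hypothesis -> underfits.
        return {x: 0 for x, _ in dataset}
    return learner

halting_program = lambda n: n >= 1000   # halts after 1000 steps
looping_program = lambda n: False       # never halts

data = [(0, 1), (1, 0)]
fits_eventually = make_learner(halting_program)
underfits_forever = make_learner(looping_program)

assert fits_eventually(data, 10**6) == dict(data)      # fits once its program halts
assert underfits_forever(data, 10**6) == {0: 0, 1: 0}  # underfits at every horizon

# "Always underfits" holds for underfits_forever exactly because its program
# never halts, so deciding the former would decide the latter.
```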

An Information-Theoretic Perspective on Overfitting and Underfitting

no code implementations · 12 Oct 2020 · Daniel Bashir, George D. Montanez, Sonia Sehra, Pedro Sandoval Segura, Julius Lauw

We present an information-theoretic framework for understanding overfitting and underfitting in machine learning and prove the formal undecidability of determining whether an arbitrary classification algorithm will overfit a dataset.

BIG-bench Machine Learning

Limits of Transfer Learning

no code implementations · 23 Jun 2020 · Jake Williams, Abel Tadesse, Tyler Sam, Huey Sun, George D. Montanez

We prove several novel results on transfer learning, showing the need to carefully select which sets of information to transfer and the need for dependence between the transferred information and the target problem.

Transfer Learning
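As a loose illustration of the dependence requirement, the toy simulation below (my own construction, not the paper's formal setting) transfers a "hint" from a source task to a target task; the hint helps only when it is statistically dependent on the target.

```python
# Toy illustration (assumed setup): transferred information helps only when
# it depends on the target problem.
import random

random.seed(0)

def run_trial(dependent):
    target = random.randint(0, 9)                          # target problem
    hint = target if dependent else random.randint(0, 9)   # transferred info
    guess = hint                                           # learner trusts the transfer
    return guess == target

for dependent in (True, False):
    wins = sum(run_trial(dependent) for _ in range(10_000))
    print(f"dependent={dependent}: success rate {wins / 10_000:.2f}")
# Dependent transfer always succeeds; independent transfer sits near 0.10,
# i.e. chance level for a 10-way guess.
```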

The Bias-Expressivity Trade-off

no code implementations · 9 Nov 2019 · Julius Lauw, Dominique Macias, Akshay Trikha, Julia Vendemiatti, George D. Montanez

Learning algorithms need bias to generalize and perform better than random guessing.
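That necessity can be made concrete with a small exact computation in the spirit of no-free-lunch arguments (my own toy example, not the paper's formalism): averaged uniformly over all binary labelings of a set of unseen points, any fixed predictor is correct exactly half the time, so only a bias toward some targets over others can lift performance above chance.

```python
# Exact no-free-lunch-style computation (toy example): over all 2^3 labelings
# of three unseen inputs, a fixed predictor's mean accuracy is exactly 0.5.
from itertools import product

unseen = [0, 1, 2]
predict = lambda x: x % 2            # any fixed predictor works here

accuracies = []
for labeling in product([0, 1], repeat=len(unseen)):  # every target function
    truth = dict(zip(unseen, labeling))
    correct = sum(predict(x) == truth[x] for x in unseen)
    accuracies.append(correct / len(unseen))

print(sum(accuracies) / len(accuracies))  # 0.5, whatever `predict` is
```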

The Futility of Bias-Free Learning and Search

no code implementations · 13 Jul 2019 · George D. Montanez, Jonathan Hayase, Julius Lauw, Dominique Macias, Akshay Trikha, Julia Vendemiatti

For a given degree of bias towards a fixed target, we show that the proportion of favorable information resources is strictly bounded from above.

The Famine of Forte: Few Search Problems Greatly Favor Your Algorithm

no code implementations · 28 Sep 2016 · George D. Montanez

Casting machine learning as a type of search, we demonstrate that the proportion of problems that are favorable for a fixed algorithm is strictly bounded, such that no single algorithm can perform well over a large fraction of them.

BIG-bench Machine Learning
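A bound of this shape can be reconstructed with a one-line Markov-inequality argument; the notation below is my own assumption (Ω the search space, τ_k the size-k targets, q(t) the algorithm's probability of success on target t), not necessarily the paper's. If targets are drawn uniformly from τ_k, each point of Ω lands in the target with probability k/|Ω|, so E[q(t)] = p = k/|Ω|, and:

```latex
% Markov's inequality bounds the fraction of favorable problems:
\Pr_{t \sim U(\tau_k)}\!\left( q(t) \ge q_{\min} \right)
  \;\le\; \frac{\mathbb{E}_t\!\left[ q(t) \right]}{q_{\min}}
  \;=\; \frac{p}{q_{\min}},
\qquad p = \frac{k}{|\Omega|}.
```

Since p is tiny for small targets in large search spaces, only a small fraction of problems can favor any fixed algorithm at success level q_min.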

The LICORS Cabinet: Nonparametric Algorithms for Spatio-temporal Prediction

no code implementations · 8 Jun 2015 · George D. Montanez, Cosma Rohilla Shalizi

Spatio-temporal data is intrinsically high-dimensional, so unsupervised modeling is feasible only if we can exploit structure in the process.
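LICORS-style methods exploit that structure through "light cones": each space-time point is summarized by the finite window of past values that could have influenced it under a maximum propagation speed. A minimal sketch of that feature construction follows (my own rendering with assumed names; the actual algorithms additionally estimate predictive states from these cones).

```python
# Minimal past-light-cone feature extraction (assumed rendering of the
# light-cone idea; not the authors' implementation).
import numpy as np

def past_light_cone(field, t, x, horizon=2, speed=1):
    """Collect field values at (t - d, x + o) for d = 1..horizon, |o| <= speed*d.

    field: 2-D array indexed [time, space]; cells outside the grid are skipped.
    The cone is the low-dimensional summary that makes nonparametric
    prediction of the high-dimensional field feasible.
    """
    T, X = field.shape
    cone = []
    for d in range(1, horizon + 1):
        for o in range(-speed * d, speed * d + 1):
            ti, xi = t - d, x + o
            if 0 <= ti < T and 0 <= xi < X:
                cone.append(field[ti, xi])
    return np.array(cone)

rng = np.random.default_rng(0)
field = rng.normal(size=(10, 20))         # a toy 1-D field evolving in time
print(past_light_cone(field, t=5, x=10))  # 8 values: 3 at depth 1, 5 at depth 2
```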
