no code implementations • 4 Feb 2021 • Sonia Sehra, David Flores, George D. Montanez
Building on recent machine-learning results that give an information-theoretic account of underfitting and overfitting, we prove it is undecidable whether an encodable learning algorithm will always underfit a dataset, even when given unlimited training time.
no code implementations • 12 Oct 2020 • Daniel Bashir, George D. Montanez, Sonia Sehra, Pedro Sandoval Segura, Julius Lauw
We present an information-theoretic framework for understanding overfitting and underfitting in machine learning and prove the formal undecidability of determining whether an arbitrary classification algorithm will overfit a dataset.
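Such undecidability results are typically proved by reduction from the halting problem. The toy sketch below is my own illustration of that reduction flavor, not the paper's construction: from a program, build a learner that memorizes (overfits) its training data iff the program halts, so a decider for "will this learner overfit?" would decide halting. All names are hypothetical, and the step budget stands in for the unbounded simulation a real reduction would interleave with training.

```python
# Toy sketch (illustrative only): a learner whose overfitting behavior
# encodes whether a given program halts.

def make_learner(program, max_steps):
    """Return a training function whose behavior depends on `program`.

    `program` is modeled as a generator we can step. If it finishes within
    `max_steps` steps (i.e., "halts" in this bounded toy model), the learner
    memorizes the training set exactly (overfits); otherwise it ignores the
    data and predicts a constant.
    """
    def fit(dataset):  # dataset: dict mapping inputs to labels
        gen = program()
        halted = False
        for _ in range(max_steps):
            try:
                next(gen)
            except StopIteration:
                halted = True
                break
        if halted:
            table = dict(dataset)           # memorize: zero training error
            return lambda x: table.get(x, 0)
        return lambda x: 0                  # constant predictor
    return fit

def halting_program():    # halts after 3 steps
    for _ in range(3):
        yield

def looping_program():    # never halts
    while True:
        yield

data = {1: 1, 2: 0, 3: 1}
memorizer = make_learner(halting_program, 100)(data)
constant = make_learner(looping_program, 100)(data)
print([memorizer(x) for x in data])  # recovers the labels: [1, 0, 1]
print([constant(x) for x in data])   # ignores them: [0, 0, 0]
```

A decider that correctly predicts which of these two behaviors a given encodable learner exhibits would, via this construction, solve the halting problem.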
no code implementations • 23 Jun 2020 • Jake Williams, Abel Tadesse, Tyler Sam, Huey Sun, George D. Montanez
To address this, we prove several novel results on transfer learning, showing the need to select carefully which sets of information to transfer and the need for dependence between the transferred information and the target problem.
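The dependence requirement can be seen in a minimal simulation (my own illustration, not the paper's construction): searching for a hidden target bitstring guided by transferred "side information". When that information depends on the target (here, a noisy copy of it), search succeeds far more often than when it is an independent random string.

```python
# Toy simulation: transferred information helps search only when it
# depends on the target. All parameters here are arbitrary choices.
import random

def search(target, hint, trials=30):
    """Guess bitstrings biased toward `hint`; success = hitting `target`."""
    for _ in range(trials):
        # copy each hint bit, flipping it with probability 0.1
        guess = [b if random.random() < 0.9 else 1 - b for b in hint]
        if guess == target:
            return True
    return False

random.seed(0)
n, runs = 8, 500
dep_wins = indep_wins = 0
for _ in range(runs):
    target = [random.randint(0, 1) for _ in range(n)]
    noisy_copy = [b if random.random() < 0.95 else 1 - b for b in target]
    unrelated = [random.randint(0, 1) for _ in range(n)]
    dep_wins += search(target, noisy_copy)    # dependent side information
    indep_wins += search(target, unrelated)   # independent side information

print(dep_wins / runs, indep_wins / runs)  # dependent hints win far more often
```

The independent hint leaves each guess bit correct only with probability 1/2, so it conveys nothing about the target; the dependent hint concentrates guesses near the target.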
no code implementations • 9 Nov 2019 • Julius Lauw, Dominique Macias, Akshay Trikha, Julia Vendemiatti, George D. Montanez
Learning algorithms need bias to generalize and perform better than random guessing.
no code implementations • 13 Jul 2019 • George D. Montanez, Jonathan Hayase, Julius Lauw, Dominique Macias, Akshay Trikha, Julia Vendemiatti
For a given degree of bias towards a fixed target, we show that the proportion of favorable information resources is strictly bounded from above.
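The flavor of such upper bounds can be seen from Markov's inequality (a simplification in my own notation, not the paper's exact bound): if $q(F)$ is the probability of success when using information resource $F$, and $\bar q = \mathbb{E}_F[q(F)]$ is the average over resources, then the proportion of resources achieving success probability at least $q_{\min}$ satisfies

$$\Pr_F\big[q(F) \ge q_{\min}\big] \;\le\; \frac{\mathbb{E}_F[q(F)]}{q_{\min}} \;=\; \frac{\bar q}{q_{\min}}.$$

Since $\bar q$ is small for nontrivial targets, highly favorable resources must be rare.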
no code implementations • 28 Sep 2016 • George D. Montanez
Casting machine learning as a type of search, we demonstrate that the proportion of problems that are favorable for a fixed algorithm is strictly bounded, such that no single algorithm can perform well over a large fraction of them.
no code implementations • 8 Jun 2015 • George D. Montanez, Cosma Rohilla Shalizi
Spatio-temporal data is intrinsically high-dimensional, so unsupervised modeling is feasible only if we can exploit structure in the process.