no code implementations • 12 Oct 2020 • Daniel Bashir, George D. Montanez, Sonia Sehra, Pedro Sandoval Segura, Julius Lauw
We present an information-theoretic framework for understanding overfitting and underfitting in machine learning, and we prove that it is formally undecidable whether an arbitrary classification algorithm will overfit a given dataset.
no code implementations • 23 Dec 2019 • Pedro Sandoval Segura, Julius Lauw, Daniel Bashir, Kinjal Shah, Sonia Sehra, Dominique Macias, George Montanez
Algorithm performance in supervised learning is a combination of memorization, generalization, and luck.
no code implementations • 9 Nov 2019 • Julius Lauw, Dominique Macias, Akshay Trikha, Julia Vendemiatti, George D. Montanez
Learning algorithms need bias to generalize and perform better than random guessing.
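The claim that a bias-free learner can do no better than random guessing can be illustrated with a toy simulation (a minimal sketch, not the paper's formal framework; the dataset and the `random_guesser` helper are hypothetical): a learner that ignores the data entirely attains expected accuracy 1/k on a k-class problem, regardless of what the labels are.

```python
import random

def random_guesser(num_classes):
    """A maximally unbiased 'learner': it ignores the input entirely
    and guesses a label uniformly at random."""
    return lambda x: random.randrange(num_classes)

def accuracy(classifier, data):
    """Fraction of (x, y) pairs the classifier labels correctly."""
    correct = sum(1 for x, y in data if classifier(x) == y)
    return correct / len(data)

random.seed(0)
k = 4
# Hypothetical toy dataset: any labeling works, since the guesser ignores x.
data = [(i, i % k) for i in range(100_000)]

clf = random_guesser(k)
acc = accuracy(clf, data)
print(f"accuracy = {acc:.3f}, chance level = {1 / k:.3f}")
```

However the labels are assigned, the measured accuracy concentrates around 1/k; only a learner biased toward some targets over others can systematically exceed chance on those targets.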
no code implementations • 13 Jul 2019 • George D. Montanez, Jonathan Hayase, Julius Lauw, Dominique Macias, Akshay Trikha, Julia Vendemiatti
For a given degree of bias towards a fixed target, we show that the proportion of favorable information resources is strictly bounded from above.