On a scalable entropic breaching of the overfitting barrier in machine learning

8 Feb 2020 · Illia Horenko

Overfitting and the treatment of "small data" are among the most challenging problems in machine learning (ML), arising when a relatively small data statistics size $T$ is not enough to provide a robust ML fit for a relatively large data feature dimension $D$. A massively parallel ML analysis of generic classification problems for different $D$ and $T$ demonstrates the existence of statistically significant linear overfitting barriers for common ML methods. For example, these results reveal that a robust classification of bioinformatics-motivated generic problems with the Long Short-Term Memory (LSTM) deep learning classifier requires, in the best case, a statistics size $T$ at least 13.8 times larger than the feature dimension $D$. It is shown that this overfitting barrier can be breached at a $10^{-12}$ fraction of the computational cost by means of the entropy-optimal Scalable Probabilistic Approximations algorithm (eSPA), which jointly solves the entropy-optimal Bayesian network inference and feature space segmentation problems. Application of eSPA to experimental single-cell RNA sequencing data exhibits a 30-fold classification performance boost compared to standard bioinformatics tools and a 7-fold boost compared to the deep learning LSTM classifier.
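
The abstract only names eSPA's two coupled ingredients (entropy-optimal inference of label probabilities and segmentation of the feature space). The sketch below is a minimal, hypothetical illustration of how such an alternating scheme could be organized, not the paper's reference implementation; all function names, update rules, and hyperparameters (`K`, `eps_E`, `eps_C`, `n_iter`) are illustrative assumptions.

```python
# Illustrative sketch (assumption, not the paper's reference code) of an
# eSPA-like alternating minimization: entropy-weighted feature selection,
# box discretization of the feature space, and per-box label probabilities.
import numpy as np

def espa_like_fit(X, Y, K=8, eps_E=1e-2, eps_C=1.0, n_iter=50, seed=0):
    """X: (T, D) real-valued features; Y: (T,) integer class labels."""
    rng = np.random.default_rng(seed)
    T, D = X.shape
    M = int(Y.max()) + 1
    Pi = np.eye(M)[Y]                                      # (T, M) one-hot labels
    S = X[rng.choice(T, K, replace=False)].astype(float)   # (K, D) box centers
    W = np.full(D, 1.0 / D)                                # feature weights (simplex)
    Lam = np.full((M, K), 1.0 / M)                         # per-box label probabilities

    for _ in range(n_iter):
        # 1) assign each point to the box minimizing the W-weighted distance
        #    minus a scaled log-likelihood of its observed label in that box
        dist = ((X[:, None, :] - S[None, :, :]) ** 2 * W).sum(-1)  # (T, K)
        loglik = Pi @ np.log(Lam + 1e-12)                          # (T, K)
        gamma = np.argmin(dist - eps_C * loglik, axis=1)           # (T,)

        # 2) move each box center to the mean of its assigned points
        for k in range(K):
            idx = gamma == k
            if idx.any():
                S[k] = X[idx].mean(axis=0)
                # 3) re-estimate the conditional label probabilities in box k
                Lam[:, k] = Pi[idx].sum(axis=0) + 1e-12
                Lam[:, k] /= Lam[:, k].sum()

        # 4) entropic (softmax-like) update of the feature weights: features
        #    with small within-box residuals receive larger weights
        err = ((X - S[gamma]) ** 2).sum(axis=0)                    # (D,)
        W = np.exp(-(err - err.min()) / (eps_E * T))
        W /= W.sum()

    return S, W, Lam

def espa_like_predict(Xnew, S, W, Lam):
    """Nearest box under the weighted metric, then its most probable label."""
    dist = ((Xnew[:, None, :] - S[None, :, :]) ** 2 * W).sum(-1)
    k = np.argmin(dist, axis=1)
    return np.argmax(Lam[:, k], axis=0)

if __name__ == "__main__":
    # Tiny synthetic check: the label depends only on the first feature.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 10))
    Y = (X[:, 0] > 0).astype(int)
    S, W, Lam = espa_like_fit(X, Y, K=4)
    acc = (espa_like_predict(X, S, W, Lam) == Y).mean()
    print(f"training accuracy: {acc:.2f}, weight on feature 0: {W[0]:.2f}")
```

The point of the joint objective (weighted discretization error plus an entropy penalty on the feature weights and a cross-entropy term for the labels) is that the segmentation and the label inference inform each other; the closed-form softmax-like update for the weights is one common way such entropy terms are handled and is assumed here only for brevity.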

