Occam's Ghost

15 Jun 2020 · Peter Kövesarki

This article applies the principle of Occam's Razor to non-parametric model building for statistical data by finding the model that requires the minimal number of bits, leading to an exceptionally effective regularization method for probability density estimators. The idea comes from the fact that likelihood maximization also minimizes the number of bits required to encode a dataset. However, traditional methods overlook that the optimized model parameters may also inadvertently play a part in encoding the data points. The article shows how to extend the bit counting to the model parameters as well, providing the first true measure of complexity for parametric models. Minimizing the total bit requirement of a model of a dataset favors smaller derivatives, smoother probability density estimates and, most importantly, a phase space with fewer relevant parameters. In fact, it is able to prune parameters and detect features with small probability at the same time. It is also shown how the method can be applied to any smooth, non-parametric probability density estimator.
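As a rough illustration of the total-bit-count idea described above (not the paper's actual counting scheme), the sketch below scores a parametric density model by the bits needed to encode the data under the model (its negative log-likelihood in base 2) plus a bit cost for the parameters themselves. The quantization scale `delta`, the `parameter_bits` form, and the single-Gaussian example are assumptions for illustration only.

```python
import numpy as np

def data_bits(log_pdf, data):
    """Bits to encode the data points under the model's density
    (negative log-likelihood converted to base 2)."""
    return -np.sum(log_pdf(data)) / np.log(2)

def parameter_bits(params, delta=1e-3):
    """Crude stand-in for the bit cost of the parameters themselves:
    each parameter encoded to precision `delta` (an assumption, not the
    paper's exact measure)."""
    params = np.asarray(params, dtype=float)
    return np.sum(np.log2(np.maximum(np.abs(params), delta) / delta))

def total_bits(log_pdf, params, data, delta=1e-3):
    """Occam-style objective: description length of data plus model."""
    return data_bits(log_pdf, data) + parameter_bits(params, delta)

# Illustrative usage: score a single-Gaussian fit to synthetic data.
rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=0.5, size=1000)
mu, sigma = data.mean(), data.std()

def gaussian_log_pdf(x, mu=mu, sigma=sigma):
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

print(total_bits(gaussian_log_pdf, [mu, sigma], data))
```

Comparing `total_bits` across candidate models (e.g. mixtures with different numbers of components) then penalizes extra parameters automatically, which is the regularization effect the abstract describes.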
