
Interpretable Meta-Measure for Model Performance

Benchmarks for the evaluation of model performance play an important role in machine learning. However, there is no established way to describe and create new benchmarks. What is more, the most common benchmarks use performance measures that share several limitations. For example, the difference in performance between two models has no probabilistic interpretation, there is no reference point to indicate whether it represents a significant improvement, and it makes no sense to compare such differences across data sets. We introduce a new meta-score assessment named Elo-based Predictive Power (EPP) that is built on top of other performance measures and allows for interpretable comparisons of models. The differences in EPP scores have a probabilistic interpretation and can be directly compared between data sets; furthermore, the logistic regression-based design allows for an assessment of ranking fitness based on a deviance statistic. We prove the mathematical properties of EPP and support them with empirical results of a large-scale benchmark on 30 classification data sets and a real-world benchmark for visual data. Additionally, we propose a Unified Benchmark Ontology that is used to give a uniform description of benchmarks.
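To make the abstract's claim about probabilistic interpretation concrete, below is a minimal sketch of how Elo-style scores can be estimated with a pairwise logistic (Bradley-Terry) model, which is consistent with the logistic regression-based design the abstract describes. The exact formulation, data splits, and deviance-based fit assessment are detailed in the paper; the `wins` counts, function names, and the choice of optimizer here are purely illustrative assumptions.

```python
# Sketch of EPP-style score estimation, assuming the pairwise logistic model
#   P(model i beats model j) = sigmoid(score_i - score_j).
# The "wins" below are hypothetical counts of evaluation rounds in which one
# model outperformed another on some base measure (e.g. AUC).

import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # logistic sigmoid

# wins[i, j] = number of rounds in which model i beat model j (hypothetical data)
wins = np.array([
    [0, 8, 9],
    [2, 0, 7],
    [1, 3, 0],
])
n_models = wins.shape[0]

def neg_log_likelihood(scores):
    """Negative log-likelihood of the pairwise-comparison logistic model."""
    nll = 0.0
    for i in range(n_models):
        for j in range(n_models):
            if i == j:
                continue
            p = expit(scores[i] - scores[j])  # P(model i beats model j)
            nll -= wins[i, j] * np.log(p)
    return nll

# Scores are only identified up to a constant shift, so fix the last one to 0.
def objective(free_scores):
    return neg_log_likelihood(np.append(free_scores, 0.0))

result = minimize(objective, x0=np.zeros(n_models - 1), method="BFGS")
scores = np.append(result.x, 0.0)
print("Estimated scores (relative to the last model):", scores)

# The interpretability claim: the difference score_i - score_j maps through the
# sigmoid to an estimated probability that model i beats model j in a new round.
print("P(model 0 beats model 2):", expit(scores[0] - scores[2]))
```

Because the score differences pass through the same sigmoid regardless of which data set produced the win counts, differences estimated on different data sets live on a common probability scale, which is the sense in which the abstract says EPP differences can be compared across data sets.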
