Deriving a Quantitative Relationship Between Resolution and Human Classification Error

24 Aug 2019 · Josiah I. Clark, Caroline A. Clark

For machine learning perception problems, human-level classification performance is commonly used as an estimate of the upper bound on algorithm performance. It is therefore important to understand, as precisely as possible, the factors that affect human-level performance. Such knowledge (1) provides a benchmark for model performance, (2) tells a project manager what kind of data to supply to human labelers in order to obtain accurate labels, and (3) enables ground-truth analysis, which is largely conducted by humans, to proceed smoothly. In this empirical study, we explored the relationship between resolution and human classification performance using the MNIST dataset down-sampled to various resolutions. The quantitative heuristic we derived could prove useful for predicting machine model performance, estimating data storage requirements, and saving valuable resources in the deployment of machine learning projects. It also has potential applications in fields such as remote sensing, medical imaging, scientific imaging, and astronomy.
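As a concrete illustration of the setup described above, the following is a minimal sketch of how MNIST digits might be down-sampled to a range of resolutions before being shown to human raters. The resolution grid, the bilinear/nearest-neighbor resampling choices, and the `downsample` helper are illustrative assumptions, not the authors' exact procedure.

```python
from PIL import Image
from torchvision.datasets import MNIST

# Illustrative resolution grid; the paper's exact grid is not reproduced here.
RESOLUTIONS = [28, 14, 10, 7, 5, 4]

def downsample(img: Image.Image, size: int) -> Image.Image:
    """Down-sample a 28x28 MNIST digit to size x size, then upscale it
    back to 28x28 with nearest-neighbor so raters view every variant at
    the same apparent size (only the effective resolution changes)."""
    small = img.resize((size, size), Image.BILINEAR)
    return small.resize((28, 28), Image.NEAREST)

# MNIST items are (PIL image, label) pairs when no transform is given.
mnist = MNIST(root="./data", train=False, download=True)
img, label = mnist[0]
variants = {s: downsample(img, s) for s in RESOLUTIONS}

for size, v in variants.items():
    v.save(f"digit{label}_{size}px.png")
```

Upscaling each variant back to a fixed display size is one common way to vary effective resolution without confounding it with apparent image size on screen.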
