What Is Wrong With Scene Text Recognition Model Comparisons? Dataset and Model Analysis

Many new proposals for scene text recognition (STR) models have been introduced in recent years. While each claims to have pushed the boundary of the technology, a holistic and fair comparison has been largely missing in the field due to inconsistent choices of training and evaluation datasets. This paper addresses this difficulty with three major contributions. First, we examine the inconsistencies among training and evaluation datasets, and the performance gaps that result from these inconsistencies. Second, we introduce a unified four-stage STR framework that most existing STR models fit into. Using this framework allows for the extensive evaluation of previously proposed STR modules and the discovery of previously unexplored module combinations. Third, we analyze the module-wise contributions to performance in terms of accuracy, speed, and memory demand, under one consistent set of training and evaluation datasets. These analyses remove the obstacles that hinder current comparisons and clarify the performance gains contributed by existing modules.
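The four-stage framework factors an STR model into transformation (Trans.), feature extraction (Feat.), sequence modeling (Seq.), and prediction (Pred.) stages. The PyTorch sketch below wires toy stand-ins for each stage to show how they compose; the FourStageSTR name, layer choices, and tensor shapes are illustrative assumptions, not the authors' reference implementation (the paper pairs, for example, a TPS transform, ResNet features, a BiLSTM, and a CTC or attention decoder).

# Hypothetical sketch of the four-stage STR framework:
# Trans. -> Feat. -> Seq. -> Pred. Stand-in modules only; not the paper's exact code.
import torch
import torch.nn as nn

class FourStageSTR(nn.Module):
    def __init__(self, num_classes: int, hidden: int = 256):
        super().__init__()
        # Trans.: normalizes the input image (TPS in the paper); identity here for brevity.
        self.transform = nn.Identity()
        # Feat.: a CNN backbone (the paper compares VGG, RCNN, ResNet); a toy conv stack here.
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, hidden, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, None)),  # collapse height -> sequence of column features
        )
        # Seq.: contextual modeling over column features (BiLSTM in the paper).
        self.sequence = nn.LSTM(hidden, hidden, bidirectional=True, batch_first=True)
        # Pred.: per-step character logits (CTC or attention in the paper); a linear head here.
        self.predict = nn.Linear(2 * hidden, num_classes)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        x = self.transform(images)          # (B, 1, H, W)
        f = self.features(x).squeeze(2)     # (B, C, W')
        f = f.permute(0, 2, 1)              # (B, W', C): one feature per horizontal step
        s, _ = self.sequence(f)             # (B, W', 2*hidden)
        return self.predict(s)              # (B, W', num_classes)

logits = FourStageSTR(num_classes=37)(torch.randn(2, 1, 32, 100))
print(logits.shape)  # torch.Size([2, 50, 37])

Swapping any one stage in this skeleton (for instance, the identity transform for TPS, or the linear head for an attention decoder) mirrors the plug-and-play module comparison the paper performs.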

ICCV 2019
Task                    Dataset  Model        Metric    Value (%)  Global Rank
Scene Text Recognition  SVT      Baek et al.  Accuracy  87.5       #29

Results from Other Papers


Task                    Dataset     Model        Metric    Value (%)  Rank
Scene Text Recognition  ICDAR 2003  Baek et al.  Accuracy  94.4       #7
Scene Text Recognition  ICDAR 2013  Baek et al.  Accuracy  92.3       #30
Scene Text Recognition  ICDAR 2015  Baek et al.  Accuracy  71.8       #25

Methods


No methods listed for this paper.