SEED: Semantics Enhanced Encoder-Decoder Framework for Scene Text Recognition

Scene text recognition is an active research topic in computer vision. Recently, many recognition methods based on the encoder-decoder framework have been proposed, and they can handle scene text with perspective distortion and curved shapes. Nevertheless, they still face many challenges, such as image blur, uneven illumination, and incomplete characters. We argue that most encoder-decoder methods are based on local visual features without explicit global semantic information. In this work, we propose a semantics enhanced encoder-decoder framework to robustly recognize low-quality scene text. The semantic information is used both in the encoder module for supervision and in the decoder module for initialization. In particular, the state-of-the-art ASTER method is integrated into the proposed framework as an exemplar. Extensive experiments demonstrate that the proposed framework is more robust to low-quality text images and achieves state-of-the-art results on several benchmark datasets.

PDF Abstract (CVPR 2020)
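
The core design in the abstract (a global semantic vector predicted from encoder features, supervised by a pretrained word embedding, and reused to initialize the decoder) can be sketched compactly. Below is a minimal PyTorch sketch, assuming illustrative dimensions (512-d encoder features over 25 time steps, a 300-d FastText-style embedding) and a hypothetical two-layer predictor; the attention decoder is omitted, and this is not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SemanticModule(nn.Module):
    """Predicts a global semantic vector from the encoder's feature sequence."""

    def __init__(self, feat_dim=512, seq_len=25, sem_dim=300):
        super().__init__()
        # Two-layer predictor; the 300-d output matches a FastText-style
        # word embedding (all dimensions here are illustrative assumptions).
        self.fc = nn.Sequential(
            nn.Linear(feat_dim * seq_len, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, sem_dim),
        )

    def forward(self, enc_feats):             # enc_feats: (B, T, feat_dim)
        return self.fc(enc_feats.flatten(1))  # -> (B, sem_dim)


class SEEDSketch(nn.Module):
    """Uses the semantic vector both for supervision (encoder side) and to
    initialize the decoder hidden state, as the abstract describes."""

    def __init__(self, encoder, sem_dim=300, hidden_dim=512):
        super().__init__()
        self.encoder = encoder                            # e.g., ASTER's CNN+BiLSTM
        self.semantic = SemanticModule(sem_dim=sem_dim)
        self.init_state = nn.Linear(sem_dim, hidden_dim)  # semantics -> h0

    def forward(self, images, word_embeddings=None):
        enc_feats = self.encoder(images)                  # (B, T, feat_dim)
        semantics = self.semantic(enc_feats)              # (B, sem_dim)
        h0 = torch.tanh(self.init_state(semantics))       # decoder initial state
        # ... an attention decoder (omitted) would run from h0 over enc_feats ...
        sem_loss = None
        if word_embeddings is not None:                   # embeddings of the GT word
            sem_loss = 1.0 - F.cosine_similarity(semantics, word_embeddings).mean()
        return h0, sem_loss


class DummyEncoder(nn.Module):
    """Stand-in encoder so the sketch runs end to end."""

    def forward(self, images):                            # images: (B, 3, H, W)
        return torch.randn(images.size(0), 25, 512)       # (B, T=25, feat_dim=512)


model = SEEDSketch(DummyEncoder())
h0, loss = model(torch.randn(2, 3, 64, 256), torch.randn(2, 300))
print(h0.shape, loss.item())                              # torch.Size([2, 512]) ...
```

During training, the cosine loss pulls the predicted semantics toward the word embedding of the ground-truth transcription, so at inference the decoder can start from a globally informed state even when local visual features are degraded by blur or occlusion.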

Results


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Optical Character Recognition (OCR) | Benchmarking Chinese Text Recognition: Datasets, Baselines, and an Empirical Study | SEED | Accuracy (%) | 61.2 | #6 |
| Scene Text Recognition | ICDAR2013 | SEED | Accuracy (%) | 92.8 | #27 |
| Scene Text Recognition | ICDAR2015 | SEED | Accuracy (%) | 80.0 | #17 |
| Scene Text Recognition | SVT | SEED | Accuracy (%) | 89.6 | #24 |
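
For reference, "Accuracy" on these scene-text benchmarks is typically word-level accuracy: a prediction counts only if the entire transcription matches the ground truth. A minimal sketch, assuming the common case-insensitive, alphanumeric-only filtering convention (exact filtering rules vary per benchmark):

```python
import re


def word_accuracy(preds, gts):
    """Word-level accuracy: exact match after case folding and stripping
    non-alphanumeric characters (a common scene-text evaluation protocol)."""
    def norm(s):
        return re.sub(r"[^0-9a-z]", "", s.lower())
    correct = sum(norm(p) == norm(g) for p, g in zip(preds, gts))
    return correct / max(len(gts), 1)


print(word_accuracy(["Hello!", "W0rld"], ["hello", "world"]))  # 0.5
```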
