FOTS: Fast Oriented Text Spotting with a Unified Network

Incidental scene text spotting is considered one of the most difficult and valuable challenges in the document analysis community. Most existing methods treat text detection and recognition as separate tasks. In this work, we propose a unified end-to-end trainable Fast Oriented Text Spotting (FOTS) network for simultaneous detection and recognition, sharing computation and visual information between the two complementary tasks. Specifically, RoIRotate is introduced to share convolutional features between detection and recognition. Thanks to this convolution-sharing strategy, FOTS adds little computational overhead to the baseline text detection network, and joint training learns more generic features, allowing our method to outperform two-stage approaches. Experiments on the ICDAR 2015, ICDAR 2017 MLT, and ICDAR 2013 datasets demonstrate that the proposed method significantly outperforms state-of-the-art methods, which further allows us to develop the first real-time oriented text spotting system: it surpasses all previous state-of-the-art results by more than 5% on the ICDAR 2015 text spotting task while running at 22.6 fps.
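The abstract does not spell out how RoIRotate extracts features for oriented text regions. Below is a minimal, hypothetical PyTorch sketch of the underlying idea, not the authors' implementation: an affine transform warps an oriented region of the shared feature map into an axis-aligned, fixed-height patch that a recognition branch could consume. The function name roi_rotate, the output sizes out_h/out_w, and the use of affine_grid/grid_sample are our illustrative assumptions.

```python
import math
import torch
import torch.nn.functional as F

def roi_rotate(feature_map, theta, out_h=8, out_w=64):
    """Warp an oriented region of a shared feature map into an
    axis-aligned patch (hypothetical sketch of the RoIRotate idea).

    feature_map: (N, C, H, W) shared convolutional features.
    theta: (N, 2, 3) affine matrices mapping output coordinates back
           onto the oriented text regions (normalized [-1, 1] coords).
    """
    n, c = feature_map.size(0), feature_map.size(1)
    # Sampling grid from output patch coordinates to input coordinates.
    grid = F.affine_grid(theta, size=(n, c, out_h, out_w), align_corners=False)
    # Bilinearly sample the shared features along that grid.
    return F.grid_sample(feature_map, grid, align_corners=False)

# Usage example with a hard-coded rotation; in the paper, the affine
# parameters would instead be derived from the predicted oriented boxes.
feats = torch.randn(1, 32, 160, 160)      # shared backbone features
angle = 0.3                                # box orientation in radians
cos_t, sin_t = math.cos(angle), math.sin(angle)
theta = torch.tensor([[[cos_t, -sin_t, 0.0],
                       [sin_t,  cos_t, 0.0]]])
patch = roi_rotate(feats, theta)
print(patch.shape)                         # torch.Size([1, 32, 8, 64])
```

Because the warp operates on feature maps rather than image crops, the detection and recognition branches share one backbone forward pass, which is the source of the small computational overhead claimed above.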

Published at CVPR 2018.
Task                  Dataset         Model    Metric                      Value (%)  Global Rank
Text Spotting         ICDAR 2015      FOTS     F-measure, Strong Lexicon   83.6       #10
Text Spotting         ICDAR 2015      FOTS     F-measure, Weak Lexicon     74.5       #17
Text Spotting         ICDAR 2015      FOTS     F-measure, Generic Lexicon  62.2       #18
Scene Text Detection  ICDAR 2015      FOTS MS  F-Measure                   89.84      #7
Scene Text Detection  ICDAR 2015      FOTS MS  Precision                   91.85      #4
Scene Text Detection  ICDAR 2015      FOTS MS  Recall                      87.92      #9
Scene Text Detection  ICDAR 2015      FOTS     F-Measure                   87.99      #12
Scene Text Detection  ICDAR 2015      FOTS     Precision                   91         #10
Scene Text Detection  ICDAR 2015      FOTS     Recall                      85.17      #15
Scene Text Detection  ICDAR 2017 MLT  FOTS MS  Precision                   81.86      #4
Scene Text Detection  ICDAR 2017 MLT  FOTS MS  Recall                      62.3       #12
Scene Text Detection  ICDAR 2017 MLT  FOTS MS  F-Measure                   70.75      #11
Scene Text Detection  ICDAR 2017 MLT  FOTS     Precision                   80.95      #6
Scene Text Detection  ICDAR 2017 MLT  FOTS     Recall                      57.51      #13
Scene Text Detection  ICDAR 2017 MLT  FOTS     F-Measure                   67.25      #12
