Mask TextSpotter: An End-to-End Trainable Neural Network for Spotting Text with Arbitrary Shapes

Recently, models based on deep neural networks have dominated the fields of scene text detection and recognition. In this paper, we investigate the problem of scene text spotting, which aims at simultaneous text detection and recognition in natural images. An end-to-end trainable neural network model for scene text spotting is proposed. The proposed model, named Mask TextSpotter, is inspired by the recently published Mask R-CNN. Unlike previous methods that also accomplish text spotting with end-to-end trainable deep neural networks, Mask TextSpotter benefits from a simple and smooth end-to-end learning procedure, in which precise text detection and recognition are achieved via semantic segmentation. Moreover, it is superior to previous methods in handling text instances of irregular shapes, for example, curved text. Experiments on ICDAR 2013, ICDAR 2015, and Total-Text demonstrate that the proposed method achieves state-of-the-art results in both scene text detection and end-to-end text recognition tasks.
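The idea of recognition via semantic segmentation can be illustrated with a minimal sketch: the recognition branch predicts one segmentation map per character class over the text region, and a word is decoded by locating each character's response peak and reading the detections left to right. The names, shapes, and thresholds below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

# Assumed 26-class lowercase character set for illustration only.
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def decode_word(char_maps, score_thresh=0.5):
    """Decode a word from per-class character segmentation maps.

    char_maps: array of shape (num_classes, H, W) holding per-pixel
    probabilities that a pixel belongs to each character class.
    Returns the decoded string, with characters ordered by x-coordinate
    (left-to-right reading order).
    """
    detections = []
    for cls_idx in range(char_maps.shape[0]):
        score = char_maps[cls_idx].max()
        if score >= score_thresh:
            # Peak location of this character class's response map.
            y, x = np.unravel_index(char_maps[cls_idx].argmax(),
                                    char_maps[cls_idx].shape)
            detections.append((x, ALPHABET[cls_idx]))
    detections.sort()  # order detected characters by x position
    return "".join(ch for _, ch in detections)

# Toy example: maps whose peaks spell "cat" at increasing x positions.
maps = np.zeros((26, 8, 32))
for ch, x in [("c", 4), ("a", 14), ("t", 24)]:
    maps[ALPHABET.index(ch), 4, x] = 0.9
print(decode_word(maps))  # -> "cat"
```

Decoding from 2D character maps rather than a 1D sequence is what lets this formulation handle curved and otherwise irregular text layouts.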


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Scene Text Detection | ICDAR 2013 | Mask TextSpotter | F-Measure | 91.7% | #3 |
| Scene Text Detection | ICDAR 2013 | Mask TextSpotter | Precision | 95.0% | #3 |
| Scene Text Detection | ICDAR 2013 | Mask TextSpotter | Recall | 88.6% | #4 |
| Scene Text Detection | ICDAR 2015 | Mask TextSpotter | F-Measure | 86.0% | #21 |
| Scene Text Detection | ICDAR 2015 | Mask TextSpotter | Precision | 91.6% | #6 |
| Scene Text Detection | ICDAR 2015 | Mask TextSpotter | Recall | 81.0% | #27 |
| Text Spotting | Inverse-Text | MaskTextSpotter v2 | F-measure (No Lexicon) | 39.0% | #5 |
| Text Spotting | Inverse-Text | MaskTextSpotter v2 | F-measure (Full Lexicon) | 43.5% | #7 |
| Scene Text Detection | Total-Text | Mask TextSpotter | F-Measure | 61.3% | #25 |
| Scene Text Detection | Total-Text | Mask TextSpotter | Precision | 69.0% | #22 |
| Scene Text Detection | Total-Text | Mask TextSpotter | Recall | 55.0% | #22 |
