SwinTextSpotter: Scene Text Spotting via Better Synergy between Text Detection and Text Recognition

End-to-end scene text spotting has attracted great attention in recent years because it exploits the intrinsic synergy between scene text detection and recognition. However, recent state-of-the-art methods typically combine detection and recognition simply by sharing a backbone, which does not directly exploit the feature interaction between the two tasks. In this paper, we propose a new end-to-end scene text spotting framework termed SwinTextSpotter. Using a Transformer encoder with a dynamic head as the detector, we unify the two tasks with a novel Recognition Conversion mechanism that explicitly guides text localization through the recognition loss. This straightforward design yields a concise framework that requires neither an additional rectification module nor character-level annotations for arbitrarily shaped text. Qualitative and quantitative experiments on the multi-oriented datasets RoIC13 and ICDAR 2015, the arbitrarily shaped datasets Total-Text and CTW1500, and the multilingual datasets ReCTS (Chinese) and VinText (Vietnamese) demonstrate that SwinTextSpotter significantly outperforms existing methods. Code is available at https://github.com/mxin262/SwinTextSpotter.
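The abstract only describes Recognition Conversion at a high level, so the following is a minimal, hypothetical PyTorch-style sketch of how such a detection-recognition coupling could look: features from the detection head produce a spatial text mask that gates the shared features fed to the recognizer, so the recognition loss backpropagates into the detection branch. The module name, layer choices, and tensor shapes are illustrative assumptions, not the authors' implementation; see the official repository linked above for the actual code.

```python
import torch
import torch.nn as nn

class RecognitionConversionSketch(nn.Module):
    """Hypothetical sketch of a Recognition Conversion-style module.

    Detection features are turned into a spatial text mask that gates the
    shared features passed to the recognizer, so gradients of the
    recognition loss flow back into the detection branch.
    Names and shapes are illustrative assumptions only.
    """

    def __init__(self, det_channels: int = 256, rec_channels: int = 256):
        super().__init__()
        # Project detection-head features into a single-channel text mask.
        self.mask_head = nn.Sequential(
            nn.Conv2d(det_channels, det_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(det_channels, 1, 1),
        )
        # Light fusion of the masked shared features for the recognizer.
        self.fuse = nn.Conv2d(rec_channels, rec_channels, 3, padding=1)

    def forward(self, det_feat: torch.Tensor, shared_feat: torch.Tensor):
        # det_feat:    (B, C_det, H, W) features from the detection head
        # shared_feat: (B, C_rec, H, W) RoI features shared with recognition
        text_mask = torch.sigmoid(self.mask_head(det_feat))  # (B, 1, H, W)
        rec_feat = self.fuse(shared_feat * text_mask)         # suppress background
        # Because text_mask is produced by the detection branch, the
        # recognition loss computed on rec_feat also updates the detector.
        return rec_feat, text_mask


if __name__ == "__main__":
    rc = RecognitionConversionSketch()
    det = torch.randn(2, 256, 28, 28)
    shared = torch.randn(2, 256, 28, 28)
    rec_feat, mask = rc(det, shared)
    print(rec_feat.shape, mask.shape)
```

The point of the sketch is the gradient path: localization is supervised not only by the detection loss but also, indirectly, by how well the recognizer can read the masked features, which is the "better synergy" the title refers to.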

Published at CVPR 2022.

Results from the Paper


Task           Dataset        Model            Metric                           Value   Global Rank
Text Spotting  ICDAR 2015     SwinTextSpotter  F-measure (%) - Strong Lexicon   83.9    #9
Text Spotting  ICDAR 2015     SwinTextSpotter  F-measure (%) - Weak Lexicon     77.3    #15
Text Spotting  ICDAR 2015     SwinTextSpotter  F-measure (%) - Generic Lexicon  70.5    #12
Text Spotting  Inverse-Text   SwinTextSpotter  F-measure (%) - No Lexicon       55.4    #3
Text Spotting  Inverse-Text   SwinTextSpotter  F-measure (%) - Full Lexicon     67.9    #3
Text Spotting  SCUT-CTW1500   SwinTextSpotter  F-measure (%) - No Lexicon       51.8    #10
Text Spotting  SCUT-CTW1500   SwinTextSpotter  F-measure (%) - Full Lexicon     77.0    #9
Text Spotting  Total-Text     SwinTextSpotter  F-measure (%) - Full Lexicon     84.1    #6
Text Spotting  Total-Text     SwinTextSpotter  F-measure (%) - No Lexicon       74.3    #8
