Detecting Oriented Text in Natural Images by Linking Segments

CVPR 2017 · Baoguang Shi, Xiang Bai, Serge Belongie

Most state-of-the-art text detection methods are specific to horizontal Latin text and are not fast enough for real-time applications. We introduce Segment Linking (SegLink), an oriented text detection method. The main idea is to decompose text into two locally detectable elements, namely segments and links. A segment is an oriented box covering a part of a word or text line; a link connects two adjacent segments, indicating that they belong to the same word or text line. Both elements are detected densely at multiple scales by an end-to-end trained, fully-convolutional neural network. Final detections are produced by combining segments connected by links. Compared with previous methods, SegLink improves along the dimensions of accuracy, speed, and ease of training. It achieves an f-measure of 75.0% on the standard ICDAR 2015 Incidental (Challenge 4) benchmark, outperforming the previous best by a large margin. It runs at over 20 FPS on 512x512 images. Moreover, without modification, SegLink is able to detect long lines of non-Latin text, such as Chinese.
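To illustrate the combining step described above, the sketch below groups segments into connected components via the detected links and merges each group into a single oriented box. It is a minimal illustration under assumed conventions: segments are (cx, cy, w, h, theta) rows, links are index pairs, and the merge uses a simplified projection heuristic rather than the paper's exact combining geometry; the function names (combine_segments, find) are hypothetical.

```python
import numpy as np

def find(parent, i):
    # Path-compressing find for a simple union-find over segment indices.
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def combine_segments(segments, links):
    """segments: (N, 5) array of (cx, cy, w, h, theta in radians).
    links: iterable of (i, j) index pairs marking linked segments.
    Returns one merged oriented box per connected component."""
    n = len(segments)
    parent = list(range(n))
    # Union segments that share a link.
    for i, j in links:
        ri, rj = find(parent, i), find(parent, j)
        if ri != rj:
            parent[ri] = rj

    # Collect connected components.
    groups = {}
    for i in range(n):
        groups.setdefault(find(parent, i), []).append(i)

    merged = []
    for idx in groups.values():
        segs = segments[idx]
        theta = segs[:, 4].mean()                       # average orientation of the group
        direction = np.array([np.cos(theta), np.sin(theta)])
        centers = segs[:, :2]
        proj = centers @ direction                      # project centers onto the text direction
        lo, hi = proj.argmin(), proj.argmax()
        # Span from the first to the last segment along the direction,
        # extended by half the widths of the two end segments (heuristic).
        length = (proj[hi] - proj[lo]) + 0.5 * (segs[lo, 2] + segs[hi, 2])
        cx, cy = 0.5 * (centers[lo] + centers[hi])
        height = segs[:, 3].mean()
        merged.append((cx, cy, length, height, theta))
    return merged
```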


Results from the Paper


Task                  Dataset      Model                         Metric     Value   Global Rank
Scene Text Detection  ICDAR 2013   SegLink                       F-Measure  85.3%   #11
Scene Text Detection  ICDAR 2013   SegLink                       Precision  87.7%   #13
Scene Text Detection  ICDAR 2013   SegLink                       Recall     83.0%   #11
Scene Text Detection  ICDAR 2015   WordSup (VGG16-synth-icdar)   F-Measure  78.2%   #37
Scene Text Detection  ICDAR 2015   WordSup (VGG16-synth-icdar)   Precision  79.3%   #39
Scene Text Detection  ICDAR 2015   WordSup (VGG16-synth-icdar)   Recall     77.0%   #37
Scene Text Detection  MSRA-TD500   SegLink                       Recall     70.0%   #17
Scene Text Detection  MSRA-TD500   SegLink                       Precision  86.0%   #15
Scene Text Detection  MSRA-TD500   SegLink                       F-Measure  77.0%   #17
