Decoupled Attention Network for Text Recognition

Text recognition has attracted considerable research interest because of its wide range of applications. Cutting-edge text recognition methods are based on attention mechanisms. However, most attention-based methods suffer from serious alignment problems due to their recurrent alignment operation, in which the alignment relies on historical decoding results. To remedy this issue, we propose a decoupled attention network (DAN), which decouples the alignment operation from historical decoding results. DAN is an effective, flexible and robust end-to-end text recognizer, which consists of three components: 1) a feature encoder that extracts visual features from the input image; 2) a convolutional alignment module that performs the alignment operation based on visual features from the encoder; and 3) a decoupled text decoder that makes the final prediction by jointly using the feature map and attention maps. Experimental results show that DAN achieves state-of-the-art performance on multiple text recognition tasks, including offline handwritten text recognition and regular/irregular scene text recognition.
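The key idea above is that attention maps are computed directly from visual features rather than from previous decoding steps. A minimal NumPy sketch of this decoupling is given below; the function names and the single-projection alignment are simplifications for illustration (the paper's alignment module is a multi-layer convolutional network), so treat this as a conceptual sketch, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=0):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def convolutional_alignment(features, w_align):
    """Sketch of the alignment module: attention maps from features only.

    features: (P, C) array of C-dim visual features at P spatial positions.
    w_align:  (C, T) hypothetical projection producing T attention maps
              (one per decoding step), with no recurrence on past outputs.
    Returns (P, T); each column is an attention map summing to 1 over positions.
    """
    scores = features @ w_align          # (P, T) alignment scores
    return softmax(scores, axis=0)       # normalize over spatial positions

def decoupled_decode(features, attn_maps):
    """Sketch of the decoupled decoder: context vectors from feature map
    and precomputed attention maps.

    Returns (T, C): one attended context vector per decoding step, which a
    classifier head would then map to characters.
    """
    return attn_maps.T @ features        # weighted sums of visual features

rng = np.random.default_rng(0)
features = rng.normal(size=(12, 8))      # 12 positions, 8 channels (toy sizes)
w_align = rng.normal(size=(8, 5))        # 5 decoding steps
attn = convolutional_alignment(features, w_align)
contexts = decoupled_decode(features, attn)
```

Because `attn` depends only on `features`, alignment errors cannot compound through the decoding history, which is the failure mode the recurrent alignment suffers from.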

Task                         | Dataset    | Model | Metric   | Value | Global Rank
Handwritten Text Recognition | IAM        | DAN   | CER      | 6.4   | # 9
Handwritten Text Recognition | IAM        | DAN   | WER      | 19.6  | # 3
Scene Text Recognition       | ICDAR 2003 | DAN   | Accuracy | 95.0  | # 4
Scene Text Recognition       | ICDAR2013  | DAN   | Accuracy | 93.9  | # 18
Scene Text Recognition       | ICDAR2015  | DAN   | Accuracy | 74.5  | # 16
Scene Text Recognition       | SVT        | DAN   | Accuracy | 89.2  | # 20
