Scene Text Recognition with Permuted Autoregressive Sequence Models

14 Jul 2022 · Darwin Bautista, Rowel Atienza

Context-aware STR methods typically use internal autoregressive (AR) language models (LMs). Inherent limitations of AR models motivated two-stage methods that employ an external LM. Because the external LM is conditionally independent of the input image, it may erroneously rectify correct predictions, leading to significant inefficiencies. Our method, PARSeq, learns an ensemble of internal AR LMs with shared weights using Permutation Language Modeling. It unifies context-free non-AR and context-aware AR inference, and iterative refinement using bidirectional context. When trained on synthetic data, PARSeq achieves state-of-the-art (SOTA) results on STR benchmarks (91.9% accuracy) and more challenging datasets. It establishes new SOTA results (96.0% accuracy) when trained on real data. PARSeq is optimal on the accuracy vs. parameter count, FLOPS, and latency trade-offs because of its simple, unified structure and parallel token processing. Due to its extensive use of attention, it is robust on arbitrarily-oriented text, which is common in real-world images. Code, pretrained weights, and data are available at: https://github.com/baudm/parseq.
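The core idea of Permutation Language Modeling is that one set of decoder weights can realize many AR decoding orders: for each sampled permutation of token positions, an attention mask lets a position attend only to the positions decoded before it under that ordering. The sketch below (an illustration assuming plain Python, not the authors' implementation) builds such masks; the identity permutation recovers the standard left-to-right causal mask, and its reverse gives right-to-left decoding.

```python
# Minimal sketch of PLM-style permutation attention masks.
# (Illustrative only; not taken from the PARSeq codebase.)
import random

def permutation_mask(perm):
    """mask[q][k] == 1 iff query position q may attend to key position k
    under the decoding order given by `perm`."""
    n = len(perm)
    mask = [[0] * n for _ in range(n)]
    for i, q in enumerate(perm):
        for k in perm[:i]:  # positions already decoded under this ordering
            mask[q][k] = 1
    return mask

n = 4
identity = list(range(n))
causal = permutation_mask(identity)          # standard left-to-right AR mask
reverse = permutation_mask(identity[::-1])   # right-to-left AR mask

random.seed(0)
perms = [identity, identity[::-1]] + [random.sample(identity, n) for _ in range(2)]
# Averaging the AR loss over sampled orderings like these trains one
# shared-weight model as an ensemble of AR LMs, which is what later
# permits non-AR, AR, or iterative-refinement inference from one model.
```

With the identity permutation, `causal[q][k]` is 1 exactly when `k < q` (a strictly lower-triangular mask), matching ordinary left-to-right AR decoding.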


Results from the Paper


Ranked #4 on Scene Text Recognition on IC19-Art (using extra training data)

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Scene Text Recognition | COCO-Text | PARSeq | 1:1 Accuracy | 79.8±0.1 | #4 |
| Scene Text Recognition | CUTE80 | PARSeq | Accuracy | 98.3±0.6 | #7 |
| Scene Text Recognition | IC19-Art | PARSeq | Accuracy (%) | 84.5±0.1 | #4 |
| Scene Text Recognition | ICDAR2013 | PARSeq | Accuracy | 98.4±0.2 | #5 |
| Scene Text Recognition | ICDAR2015 | PARSeq | Accuracy | 89.6±0.3 | #7 |
| Scene Text Recognition | IIIT5k | PARSeq | Accuracy | 99.1±0.1 | #5 |
| Scene Text Recognition | SVT | PARSeq | Accuracy | 97.9±0.2 | #7 |
| Scene Text Recognition | SVTP | PARSeq | Accuracy | 95.7±0.9 | #8 |