TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models

21 Sep 2021 · Minghao Li, Tengchao Lv, Jingye Chen, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei

Text recognition is a long-standing research problem for document digitization. Existing approaches are usually built on a CNN for image understanding and an RNN for character-level text generation, and an additional language model is often needed as a post-processing step to improve the overall accuracy. In this paper, we propose TrOCR, an end-to-end text recognition approach with pre-trained image Transformer and text Transformer models, which leverages the Transformer architecture for both image understanding and wordpiece-level text generation. The TrOCR model is simple but effective, and can be pre-trained with large-scale synthetic data and fine-tuned with human-labeled datasets. Experiments show that TrOCR outperforms the current state-of-the-art models on printed, handwritten, and scene text recognition tasks. The TrOCR models and code are publicly available at https://aka.ms/trocr.
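
The released checkpoints are also hosted on the Hugging Face Hub, where the encoder-decoder pair is exposed through the standard TrOCRProcessor / VisionEncoderDecoderModel classes. A minimal inference sketch, assuming the transformers and Pillow packages are installed and using the microsoft/trocr-base-handwritten checkpoint (the input path is a placeholder):

```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

# Processor bundles image preprocessing and the wordpiece tokenizer;
# the model is an image-Transformer encoder + text-Transformer decoder.
processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")

# TrOCR expects a single cropped text-line image.
image = Image.open("line_image.png").convert("RGB")  # placeholder path

# Encode the image, autoregressively generate wordpiece ids, decode to text.
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(text)
```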

Results from the Paper


TrOCR is ranked #1 for Handwritten Text Recognition on IAM (line-level) when using extra training data.

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Handwritten Text Recognition | IAM | TrOCR-large (558M) | CER | 2.89 | #3 |
| Handwritten Text Recognition | IAM | TrOCR-base (334M) | CER | 3.42 | #5 |
| Handwritten Text Recognition | IAM | TrOCR-small (62M) | CER | 4.22 | #6 |
| Handwritten Text Recognition | IAM (line-level) | TrOCR | Test CER | 3.4 | #1 |
| Handwritten Text Recognition | IAM (line-level) | TrOCR | Test WER | - | #5 |
| Handwritten Text Recognition | LAM (line-level) | TrOCR | Test CER | 3.6 | #5 |
| Handwritten Text Recognition | LAM (line-level) | TrOCR | Test WER | 11.6 | #5 |
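
CER (character error rate) and WER (word error rate) in the table are edit-distance metrics: CER is the Levenshtein distance between the predicted and reference transcriptions, normalized by the reference length, and is typically reported as a percentage (so CER 2.89 means 2.89%). A minimal sketch of the computation; the function names are illustrative, not from the paper:

```python
def levenshtein(ref: str, hyp: str) -> int:
    """Edit distance (insertions, deletions, substitutions) between two strings."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1]

def cer(ref: str, hyp: str) -> float:
    """Character error rate: edit distance normalized by reference length."""
    return levenshtein(ref, hyp) / len(ref)

print(cer("handwritten", "handwriten"))  # one deleted char over 11 -> ~0.091
```

WER is computed the same way, with words as the edit units instead of characters.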
