A Span Extraction Approach for Information Extraction on Visually-Rich Documents

2 Jun 2021 · Tuan-Anh D. Nguyen, Hieu M. Vu, Nguyen Hong Son, Minh-Tien Nguyen

Information extraction (IE) for visually-rich documents (VRDs) has recently achieved state-of-the-art (SOTA) performance thanks to the adaptation of Transformer-based language models, demonstrating the great potential of pre-training methods. In this paper, we present a new approach to improve the capability of language model pre-training on VRDs. First, we introduce a new query-based IE model that employs span extraction instead of the common sequence labeling approach. Second, to further extend the span extraction formulation, we propose a new training task that focuses on modelling the relationships among semantic entities within a document. This task enables target spans to be extracted recursively and can be used either to pre-train the model or as an IE downstream task. Evaluation on three datasets of popular business documents (invoices, receipts) shows that our proposed method achieves significant improvements over existing models. The method also provides a mechanism for knowledge accumulation across multiple downstream IE tasks.
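To illustrate the span extraction formulation described above, here is a minimal sketch of a query-based span-extraction head in PyTorch. It assumes a generic Transformer encoder (e.g. a LayoutLM-style model) that produces one hidden vector per token of the concatenated query and document; names such as `SpanExtractionHead` and the hidden size are illustrative, not taken from the paper.

```python
# Minimal sketch of a query-based span-extraction head (not the paper's exact
# architecture). Given encoder states over [query tokens ; document tokens],
# it predicts start/end positions of the answer span, as in extractive QA.
import torch
import torch.nn as nn


class SpanExtractionHead(nn.Module):
    """Predicts start/end token positions of the target span for a field query."""

    def __init__(self, hidden_size: int = 768):
        super().__init__()
        # Two logits per token: "span starts here" and "span ends here".
        self.qa_outputs = nn.Linear(hidden_size, 2)

    def forward(self, sequence_output: torch.Tensor):
        # sequence_output: (batch, seq_len, hidden_size) from the encoder.
        logits = self.qa_outputs(sequence_output)        # (batch, seq_len, 2)
        start_logits, end_logits = logits.split(1, dim=-1)
        return start_logits.squeeze(-1), end_logits.squeeze(-1)


if __name__ == "__main__":
    # Training uses cross-entropy against gold start/end indices, analogous
    # to SQuAD-style extractive question answering heads.
    head = SpanExtractionHead(hidden_size=768)
    hidden_states = torch.randn(2, 128, 768)             # dummy encoder output
    start_logits, end_logits = head(hidden_states)

    loss_fn = nn.CrossEntropyLoss()
    start_positions = torch.tensor([5, 17])
    end_positions = torch.tensor([9, 20])
    loss = (loss_fn(start_logits, start_positions)
            + loss_fn(end_logits, end_positions)) / 2
    print(loss.item())
```

In this formulation, each field of interest (e.g. invoice number, total amount) is posed as a query, and the model returns the span of document tokens answering it; the paper's recursive relation task builds on this by extracting spans conditioned on previously extracted entities.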
