Search Results for author: Milan Šulc

Found 6 papers, 4 papers with code

DocILE Benchmark for Document Information Localization and Extraction

1 code implementation • 11 Feb 2023 • Štěpán Šimsa, Milan Šulc, Michal Uřičář, Yash Patel, Ahmed Hamdi, Matěj Kocián, Matyáš Skalický, Jiří Matas, Antoine Doucet, Mickaël Coustaty, Dimosthenis Karatzas

This paper introduces the DocILE benchmark with the largest dataset of business documents for the tasks of Key Information Localization and Extraction and Line Item Recognition.

Key Information Extraction · Unsupervised Pre-training

DocILE 2023 Teaser: Document Information Localization and Extraction

no code implementations • 29 Jan 2023 • Štěpán Šimsa, Milan Šulc, Matyáš Skalický, Yash Patel, Ahmed Hamdi

The DocILE 2023 competition, hosted as a lab at the CLEF 2023 conference and as an ICDAR 2023 competition, will run the first major benchmark for the tasks of Key Information Localization and Extraction (KILE) and Line Item Recognition (LIR) from business documents.

Information Retrieval · Retrieval

Text Detection Forgot About Document OCR

2 code implementations • 14 Oct 2022 • Krzysztof Olejniczak, Milan Šulc

While state-of-the-art methods for in-the-wild text recognition are typically evaluated on complex scenes, their performance in the document domain is rarely published, and a comprehensive comparison with methods for document OCR is missing.

Optical Character Recognition · Optical Character Recognition (OCR) +1

Business Document Information Extraction: Towards Practical Benchmarks

no code implementations • 20 Jun 2022 • Matyáš Skalický, Štěpán Šimsa, Michal Uřičář, Milan Šulc

Information extraction from semi-structured documents is crucial for frictionless business-to-business (B2B) communication.

Danish Fungi 2020 -- Not Just Another Image Recognition Dataset

1 code implementation • 18 Mar 2021 • Lukáš Picek, Milan Šulc, Jiří Matas, Jacob Heilmann-Clausen, Thomas S. Jeppesen, Thomas Læssøe, Tobias Frøslev

Interestingly, ViT achieves results superior to the CNN baselines, with 80.45% accuracy and a 0.743 macro F1 score, reducing the CNN error by 9% and 12%, respectively.

Classifier calibration · Fine-Grained Image Classification +2
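
A quick back-of-the-envelope check of what the reported 9% and 12% relative error reductions imply about the CNN baselines. This is a sketch, assuming "error" here means 1 - accuracy and 1 - macro F1; the derived CNN baseline values below are implied by these assumptions, not quoted from the paper.

```python
# Reported ViT results (from the abstract snippet above)
vit_accuracy = 0.8045   # top-1 accuracy
vit_macro_f1 = 0.743    # macro F1 score

# Reported relative error reductions of ViT vs. the CNN baselines
acc_error_reduction = 0.09
f1_error_reduction = 0.12

# If ViT error = CNN error * (1 - reduction),
# then CNN error = ViT error / (1 - reduction).
cnn_accuracy = 1 - (1 - vit_accuracy) / (1 - acc_error_reduction)
cnn_macro_f1 = 1 - (1 - vit_macro_f1) / (1 - f1_error_reduction)

print(f"Implied CNN accuracy ≈ {cnn_accuracy:.2%}")   # ≈ 78.52%
print(f"Implied CNN macro F1 ≈ {cnn_macro_f1:.3f}")   # ≈ 0.708
```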
