Search Results for author: Łukasz Borchmann

Found 9 papers, 4 papers with code

STable: Table Generation Framework for Encoder-Decoder Models

no code implementations • 8 Jun 2022 • Michał Pietruszka, Michał Turski, Łukasz Borchmann, Tomasz Dwojak, Gabriela Pałka, Karolina Szyndler, Dawid Jurkiewicz, Łukasz Garncarek

The output structure of database-like tables, consisting of values structured in horizontal rows and vertical columns identifiable by name, can cover a wide range of NLP tasks.

Joint Entity and Relation Extraction • Knowledge Base Population

Going Full-TILT Boogie on Document Understanding with Text-Image-Layout Transformer

no code implementations • 18 Feb 2021 • Rafał Powalski, Łukasz Borchmann, Dawid Jurkiewicz, Tomasz Dwojak, Michał Pietruszka, Gabriela Pałka

We address the challenging problem of Natural Language Comprehension beyond plain-text documents by introducing the TILT neural network architecture which simultaneously learns layout information, visual features, and textual semantics.

Ranked #1 on Visual Question Answering on DocVQA (using extra training data)

Document Image Classification • Visual Question Answering

From Dataset Recycling to Multi-Property Extraction and Beyond

1 code implementation • CoNLL 2020 • Tomasz Dwojak, Michał Pietruszka, Łukasz Borchmann, Jakub Chłędowski, Filip Graliński

This paper investigates various Transformer architectures on the WikiReading Information Extraction and Machine Reading Comprehension dataset.

Machine Reading Comprehension

Successive Halving Top-k Operator

1 code implementation • 8 Oct 2020 • Michał Pietruszka, Łukasz Borchmann, Filip Graliński

We propose a differentiable successive halving method of relaxing the top-k operator, rendering gradient-based optimization possible.
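To make the abstract's idea concrete, here is a toy NumPy sketch of a successive-halving relaxation of top-k: entries are paired up and each pair is reduced to a temperature-controlled soft maximum until k values remain. This is a hedged illustration only, not the paper's exact operator (the function names, the `temperature` parameter, and the single-bracket pairing scheme are assumptions of this sketch; a single bracket can drop a true top-k value when two large entries land in the same pair).

```python
import numpy as np

def soft_pairwise_max(a, b, temperature=0.1):
    # Differentiable surrogate for elementwise max(a, b): a softmax-weighted
    # average that approaches the hard max as temperature -> 0.
    # The max-subtraction is the standard log-sum-exp stabilization trick.
    m = np.maximum(a, b)
    ea = np.exp((a - m) / temperature)
    eb = np.exp((b - m) / temperature)
    wa = ea / (ea + eb)
    return wa * a + (1.0 - wa) * b

def soft_top_k(scores, k, temperature=0.1):
    # Toy successive-halving relaxation: repeatedly pair adjacent entries
    # and keep a soft maximum of each pair until only k values remain.
    # Assumes len(scores) is k * 2**m for some integer m >= 0.
    x = np.asarray(scores, dtype=float)
    while x.size > k:
        x = soft_pairwise_max(x[0::2], x[1::2], temperature)
    return x
```

At a low temperature the soft maxima are numerically close to the hard ones, e.g. `soft_top_k([1.0, 9.0, 2.0, 8.0], 2, temperature=0.01)` is approximately `[9.0, 8.0]`, while higher temperatures blend the pair members and keep gradients flowing to both.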

On the Multi-Property Extraction and Beyond

no code implementations • 15 Jun 2020 • Tomasz Dwojak, Michał Pietruszka, Łukasz Borchmann, Filip Graliński, Jakub Chłędowski

In this paper, we investigate the Dual-source Transformer architecture on the WikiReading information extraction and machine reading comprehension dataset.

Machine Reading Comprehension

ApplicaAI at SemEval-2020 Task 11: On RoBERTa-CRF, Span CLS and Whether Self-Training Helps Them

no code implementations • SemEval 2020 • Dawid Jurkiewicz, Łukasz Borchmann, Izabela Kosmala, Filip Graliński

This paper presents the winning system for the Propaganda Technique Classification (TC) task and the second-placed system for the Propaganda Span Identification (SI) task.

Propaganda Span Identification
